How to smooth data of increasing noise - r
Chemist here (so not very good with statistical analysis) and novice in R:
I have various sets of data where the yield of a reaction is monitored over time, such as:
The data:
df <- structure(list(time = c(15, 30, 45, 60, 75, 90, 105, 120, 135,
150, 165, 180, 195, 210, 225, 240, 255, 270, 285, 300, 315, 330,
345, 360, 375, 390, 405, 420, 435, 450, 465, 480, 495, 510, 525,
540, 555, 570, 585, 600, 615, 630, 645, 660, 675, 690, 705, 720,
735, 750, 765, 780, 795, 810, 825, 840, 855, 870, 885, 900, 915,
930, 945, 960, 975, 990, 1005, 1020, 1035, 1050, 1065, 1080,
1095, 1110, 1125, 1140, 1155, 1170, 1185, 1200, 1215, 1230, 1245,
1260, 1275, 1290, 1305, 1320, 1335, 1350, 1365, 1380, 1395, 1410,
1425, 1440, 1455, 1470, 1485, 1500, 1515, 1530, 1545, 1560, 1575,
1590, 1605, 1620, 1635, 1650, 1665, 1680, 1695, 1710, 1725, 1740,
1755, 1770, 1785, 1800, 1815, 1830, 1845, 1860, 1875, 1890, 1905,
1920, 1935, 1950, 1965, 1980, 1995, 2010, 2025, 2040, 2055, 2070,
2085, 2100, 2115, 2130), yield = c(9.3411, 9.32582, 10.5475,
13.5358, 17.3376, 16.7444, 20.7234, 19.8374, 24.327, 27.4162,
27.38, 31.3926, 29.3289, 32.2556, 33.0025, 35.3358, 35.8986,
40.1859, 40.3886, 42.2828, 41.23, 43.8108, 43.9391, 43.9543,
48.0524, 47.8295, 48.674, 48.2456, 50.2641, 50.7147, 49.6828,
52.8877, 51.7906, 57.2553, 53.6175, 57.0186, 57.6598, 56.4049,
57.1446, 58.5464, 60.7213, 61.0584, 57.7481, 59.9151, 64.475,
61.2322, 63.5167, 64.6289, 64.4245, 62.0048, 65.5821, 65.8275,
65.7584, 68.0523, 65.4874, 68.401, 68.1503, 67.8713, 69.5478,
69.9774, 73.4199, 66.7266, 70.4732, 67.5119, 69.6107, 70.4911,
72.7592, 69.3821, 72.049, 70.2548, 71.6336, 70.6215, 70.8611,
72.0337, 72.2842, 76.0792, 75.2526, 72.7016, 73.6547, 75.6202,
76.5013, 74.2459, 76.033, 78.4803, 76.3058, 73.837, 74.795, 76.2126,
75.1816, 75.3594, 79.9158, 77.8157, 77.8152, 75.3712, 78.3249,
79.1198, 77.6184, 78.1244, 78.1741, 77.9305, 79.7576, 78.0261,
79.8136, 75.5314, 80.2177, 79.786, 81.078, 78.4183, 80.8013,
79.3855, 81.5268, 78.416, 78.9021, 79.9394, 80.8221, 81.241,
80.6111, 79.7504, 81.6001, 80.7021, 81.1008, 82.843, 82.2716,
83.024, 81.0381, 80.0248, 85.1418, 83.1229, 83.3334, 83.2149,
84.836, 79.5156, 81.909, 81.1477, 85.1715, 83.7502, 83.8336,
83.7595, 86.0062, 84.9572, 86.6709, 84.4124)), .Names = c("time",
"yield"), row.names = c(NA, -142L), class = "data.frame")
What I want to do with the data:
I need to smooth the data in order to plot the 1st derivative. In the paper, the author mentions that one can fit a high-order polynomial and use that for the processing, which I think is wrong since we don't really know the true relationship between time and yield for these data, and it is definitely not polynomial. I tried it regardless, and the plot of the derivative did not make any chemical sense, as expected. Next I looked into loess using:

loes <- loess(yield ~ time, data = df, span = 0.9)

which gave a much better fit. However, the best results so far were obtained using:

spl <- smooth.spline(df$time, y = df$yield, cv = TRUE)
predspl <- as.data.frame(predict(spl))
colnames(predspl) <- c('time', 'yield')
pred.der <- as.data.frame(predict(spl, deriv = 1))
colnames(pred.der) <- c('time', 'yield')

which gave the best fit, especially in the initial data points (by visual inspection).
The problem I have:
The issue, however, is that the derivative looks really good only up to t = 500 s and then starts wiggling more and more towards the end. This shouldn't happen from a chemistry point of view; it is just a result of overfitting towards the end of the data due to the increase in noise. I know this because for some experiments that I performed 3 times and averaged (so the noise decreased), the wiggling in the derivative plot is much smaller.
What I have tried so far:
I tried different values of spar, which, although it correctly smooths the later data, causes a poor fit in the initial data (which are the most important). I also tried reducing the number of knots, but got a result similar to changing the spar value. What I think I need is a larger number of knots at the beginning, smoothly decreasing to a small number towards the end, to avoid that overfitting.
The question:
Is my reasoning correct here? Does anyone know how I can achieve the above effect in order to get a smooth derivative without any wiggling? Do I need to try a fit other than the spline, maybe? I have attached a picture at the end where you can see the derivative from smooth.spline vs. time, along with a black line (drawn by hand) of what it should look like. Thank you in advance for your help.
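To make the spar trade-off described above concrete, here is a sketch of the kind of comparison involved. The spar values are arbitrary examples, and the synthetic data below merely stand in for the question's df (which can be dropped in instead):

```r
## Stand-in data shaped like the question's yield curve: a saturating
## trend with noise that grows over time. Replace with the real df.
set.seed(1)
df <- data.frame(time = seq(15, 2130, by = 15))
df$yield <- 85 * (1 - exp(-df$time / 500)) +
  rnorm(nrow(df), sd = 0.3 + df$time / 1500)

## Compare a few spar values: a larger spar tames the noisy tail but
## flattens the fit at the steep start of the curve.
spars <- c(0.6, 0.8, 1.0)
fits <- lapply(spars, function(s) smooth.spline(df$time, df$yield, spar = s))
plot(df$time, df$yield, pch = 16, cex = 0.5, xlab = "time", ylab = "yield")
for (i in seq_along(fits)) lines(fits[[i]], col = i + 1, lwd = 2)
legend("bottomright", legend = paste("spar =", spars), col = 2:4, lwd = 2)
```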
I think you're on the right track on having more closely spaced knots for the spline at the start of the curve. You can specify knot locations for smooth.spline using all.knots (at least on R >= 3.4.3; I skimmed the release notes for R, but couldn't pinpoint the version where this became available).
Below is an example, and the resulting smoother fit for the derivative, after some manual work trying out different knot positions:
with(df, {
  kn <- c(0, c(50, 100, 200, 350, 500, 1500) / max(time), 1)
  s <- smooth.spline(time, yield, cv = TRUE)
  s2 <- smooth.spline(time, yield, all.knots = kn)
  ds <- predict(s, deriv = 1)
  ds2 <- predict(s2, deriv = 1)
  np <- list(mfrow = c(2, 1), mar = c(4, 4, 1, 2))
  withr::with_par(np, {
    plot(time, yield)
    lines(s)
    lines(s2, lty = 2, col = 'red')
    plot(ds, type = 'l', ylim = c(0, 0.15))
    lines(ds2, lty = 2, col = 'red')
  })
})
You can probably fine-tune the locations further, but I wouldn't be too concerned about it. The primary fits are already all but indistinguishable, and I'd say you're asking quite a lot from these data in terms of identifying details about the derivative. This should be evident if you plot(time[-1], diff(yield) / diff(time)), which gives you an impression of how much information your data actually carry about the derivative.
Created on 2018-02-15 by the reprex package (v0.2.0).
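The plot(time[-1], diff(yield) / diff(time)) check mentioned above can be run directly; here is a self-contained sketch, with synthetic stand-in data in place of the question's df:

```r
## Stand-in for the question's df: a saturating curve with growing noise.
set.seed(1)
df <- data.frame(time = seq(15, 2130, by = 15))
df$yield <- 85 * (1 - exp(-df$time / 500)) +
  rnorm(nrow(df), sd = 0.3 + df$time / 1500)

## Raw finite differences: an unsmoothed estimate of the derivative,
## showing how little pointwise information the data carry about it.
fd <- with(df, diff(yield) / diff(time))
plot(df$time[-1], fd, xlab = "time", ylab = "d(yield)/d(time)")
```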
Related
Interpolate with splines without surpassing next value R
I have a dataset of accumulated data. I am trying to interpolate some missing values, but at some points I get a value that exceeds the next observation. This is an example of my data:

dat <- tibble(day = c(1:30),
              value = c(278, 278, 278, NA, NA, 302, 316, NA, 335, 359, NA, NA,
                        383, 403, 419, 419, 444, NA, NA, 444, 464, 487, 487,
                        487, NA, NA, 487, 487, 487, 487))

My dataset is quite long, and when I use smooth.spline to interpolate the missing values I get a value greater than the next observation, which is quite absurd considering I am dealing with accumulated data. This is the output I get:

value.smspl <- c(278, 278, 278, 287.7574, 295.2348, 302, 316, 326.5689, 335,
                 359, 364.7916, 377.3012, 383, 403, 419, 419, 444, 439.765,
                 447.1823, 444, 464, 487, 487, 487, 521.6235, 526.3715, 487,
                 487, 487, 487)

My question is: can you somehow set boundaries for the interpolation so the result is reliable? If so, how could you do it?
You have monotonic data for interpolation, so we can use the "hyman" method in spline():

x <- dat$day
yi <- y <- dat$value
naInd <- is.na(y)
yi[naInd] <- spline(x[!naInd], y[!naInd], xout = x[naInd], method = "hyman")$y
plot(x, y, pch = 19)                           ## non-NA data (black)
points(x[naInd], yi[naInd], pch = 19, col = 2) ## interpolation at NA (red)

Package zoo has a number of functions to fill NA values, one of which is na.spline. So, as G. Grothendieck (a wizard for time series) suggests, the following does the same:

library(zoo)
library(dplyr)
dat %>% mutate(value.interp = na.spline(value, method = "hyman"))
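As a quick sanity check on the monotonicity guarantee, here is a self-contained sketch using toy accumulated data (not the asker's dat):

```r
## Toy accumulated (non-decreasing) series with gaps.
x <- 1:10
y <- c(1, 2, NA, 4, 5, NA, 7, 9, NA, 12)
naInd <- is.na(y)
yi <- y
yi[naInd] <- spline(x[!naInd], y[!naInd], xout = x[naInd], method = "hyman")$y

## Unlike an unconstrained smoothing spline, the hyman interpolant never
## overshoots the next observation: the filled series stays non-decreasing.
stopifnot(all(diff(yi) >= 0))
```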
How to plot two groups of values?
These are my sets of four mean values:

meanf1hindi = c(253, 297, 377, 426, 476, 518, 560, 620, 657, 697)
meanf2hindi = c(850, 887, 1017, 1080, 1197, 1342, 1694, 1820, 2265)
meanf1tamil = c(260, 304, 390, 435, 483, 527, 563, 628, 670, 704)
meanf2tamil = c(891, 826, 1018, 1068, 1188, 1355, 1709, 1834, 1976, 2303)

I would like to make a line graph of meanf1hindi and meanf2hindi together, and do the same with meanf1tamil and meanf2tamil. This is what I have done so far, and I don't know how to proceed further:

plot(meanf1hindi, meanf2hindi)
Error in xy.coords(x, y, xlabel, ylabel, log) : 'x' and 'y' lengths differ
You get the error because the lengths of your vectors differ. You can make the two vectors the same length by removing one value from the longer vector, in this case by dropping one value of meanf1hindi:

> length(meanf1hindi)
[1] 10
> length(meanf2hindi)
[1] 9

plot(meanf1hindi[-1], meanf2hindi)

Output:
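For the "together" part of the question, one option, sketched here, is to truncate each pair to its common length (here dropping the last value rather than the first; which point to drop is a judgment call) and overlay the two language curves:

```r
meanf1hindi <- c(253, 297, 377, 426, 476, 518, 560, 620, 657, 697)
meanf2hindi <- c(850, 887, 1017, 1080, 1197, 1342, 1694, 1820, 2265)
meanf1tamil <- c(260, 304, 390, 435, 483, 527, 563, 628, 670, 704)
meanf2tamil <- c(891, 826, 1018, 1068, 1188, 1355, 1709, 1834, 1976, 2303)

## Truncate each pair to its shared length before plotting.
nh <- min(length(meanf1hindi), length(meanf2hindi))
nt <- min(length(meanf1tamil), length(meanf2tamil))
plot(meanf1hindi[1:nh], meanf2hindi[1:nh], type = "b",
     xlab = "mean F1", ylab = "mean F2")
lines(meanf1tamil[1:nt], meanf2tamil[1:nt], type = "b", col = "red")
legend("topleft", legend = c("Hindi", "Tamil"),
       col = c("black", "red"), lty = 1)
```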
glm fitted values mirrored/won't match
I've got a strange problem with plotting the fitted values of a glm. My code is:

Data <- data.frame("Sp" = c(111.4, 185, 231, 272.5, 309, 342, 371, 399, 424,
                            447, 469, 489, 508, 527, 543, 560, 575, 589, 603,
                            616, 630, 642, 653, 664, 675, 685, 695, 705, 714,
                            725, 731, 740),
                   "nrC" = 1:32)
modell <- glm(Sp ~ nrC, data = Data, family = Gamma)
pred <- predict(modell, newdata = data.frame("nrC" = 1:32), type = "response")
plot(Data$nrC, Data$Sp, xlim = c(0, 40), ylim = c(50, 1000))
lines(Data$nrC, pred, col = "blue")

The blue line representing the fitted values seems to be OK, apart from being horizontally mirrored. I'm relatively new to this, so maybe I'm missing something obvious here, but I can't figure out what's wrong. Doing the same with the data presented here works perfectly fine. I'd be grateful for any hints!
The gamma distribution isn't quite right for this data set (R's Gamma family uses the inverse link by default, which is likely why the fitted curve appears mirrored). The data shown in your plot look like a square-root-ish function, so try specifying the model like this:

modell <- glm(Sp ~ sqrt(nrC), data = Data, family = gaussian)
pred <- predict(modell, newdata = data.frame("nrC" = 1:32), type = "response")
plot(Data$nrC, Data$Sp, xlim = c(0, 40), ylim = c(50, 1000))
lines(Data$nrC, pred, col = "blue")
R, ggplot2: Fit curve to scatter plot
I am trying to fit curves to the following scatter plot with ggplot2. I found the geom_smooth function, but trying different methods and spans, I never seem to get the curves right... This is my scatter plot: And this is my best attempt: Can anyone get better curves that fit correctly and don't look so wiggly? Thanks! Find a MWE below:

my.df <- data.frame(
  sample = paste("samp", 1:60, sep = ""),
  reads = c(523, 536, 1046, 1071, 2092, 2142, 4184, 4283, 8367, 8566, 16734,
            17132, 33467, 34264, 66934, 68528, 133867, 137056, 267733, 274112,
            409, 439, 818, 877, 1635, 1754, 3269, 3508, 6538, 7015, 13075,
            14030, 26149, 28060, 52297, 56120, 104594, 112240, 209188, 224479,
            374, 463, 748, 925, 1496, 1850, 2991, 3699, 5982, 7397, 11963,
            14794, 23925, 29587, 47850, 59174, 95699, 118347, 191397, 236694),
  number = c(17, 14, 51, 45, 136, 130, 326, 333, 742, 738, 1637, 1654, 3472,
             3619, 7035, 7444, 13133, 13713, 21167, 21535, 11, 22, 30, 44,
             108, 137, 292, 349, 739, 853, 1605, 1832, 3099, 3565, 5287, 5910,
             7832, 8583, 10429, 11240, 21, 43, 82, 124, 208, 296, 421, 568,
             753, 908, 1127, 1281, 1448, 1608, 1723, 1854, 1964, 2064, 2156,
             2259),
  condition = rep(paste("cond", 1:3, sep = ""), each = 20))

png(filename = "TEST1.png", height = 800, width = 1000)
print( # or ggsave()
  ggplot(data = my.df, aes(x = reads, y = log2(number + 1),
                           group = condition, color = condition)) +
    geom_point()
)
dev.off()

png(filename = "TEST2.png", height = 800, width = 1000)
print( # or ggsave()
  ggplot(data = my.df, aes(x = reads, y = log2(number + 1),
                           group = condition, color = condition)) +
    geom_point() +
    geom_smooth(se = FALSE, method = "loess", span = 0.5)
)
dev.off()
This is a very broad question, as you're effectively looking for a model with less variance (more bias), of which there are many. Here's one:

ggplot(data = my.df, aes(x = reads, y = log2(number + 1), color = condition)) +
  geom_point() +
  geom_smooth(se = FALSE, method = "gam", formula = y ~ s(log(x)))

For documentation, see ?mgcv::gam or a suitable text on modeling. Depending on your use case, it may make more sense to fit your model outside of ggplot.
How to fit a smooth line to some points but preserve monotonicity
I have the following data points:

example <- structure(list(
  y = c(1, 0.961538461538462, 0.923076923076923, 0.884615384615385,
        0.846153846153846, 0.807692307692308, 0.769230769230769,
        0.730769230769231, 0.730769230769231, 0.730769230769231,
        0.687782805429864, 0.687782805429864, 0.641930618401207,
        0.596078431372549, 0.596078431372549, 0.54640522875817,
        0.496732026143791, 0.496732026143791, 0.496732026143791,
        0.496732026143791, 0.496732026143791, 0.496732026143791,
        0.496732026143791, 0.496732026143791, 0.496732026143791,
        0.496732026143791, 0.496732026143791),
  x = c(0, 59, 115, 156, 268, 329, 353, 365, 377, 421, 431, 448, 464, 475,
        477, 563, 638, 744, 769, 770, 803, 855, 1040, 1106, 1129, 1206,
        1227)),
  .Names = c("y", "x"), row.names = c(NA, -27L), class = "data.frame")

I would like to fit a smooth line. There are several methods in R to do it, using loess, ksmooth, locpoly, etc. Is there any way, however, to ensure or force that the resulting smoothed line will be monotonic (in the case of the present example, monotonically decreasing)?
You can use the scam() function in the scam package for uni- or multivariate smoothing with constraints. The help file ?scam::shape.constrained.smooth.terms shows all of the available options. For example, the B-spline basis used for smoothing can be penalized to yield monotonically decreasing coefficients with scam(y ~ s(x, bs = "mpd")):

require(scam)
attach(example)
yhat <- predict(scam(y ~ s(x, bs = "mpd")), se = TRUE)
plot(x, y)
lines(x, y = yhat$fit)
lines(x, y = yhat$fit + 1.96 * yhat$se.fit, lty = 2)
lines(x, y = yhat$fit - 1.96 * yhat$se.fit, lty = 2)