R time series - having trouble making Bollinger lines - need simple example please

I'm learning the R language. I know how to do a moving average, but I need to do more, and I am not a statistician - unfortunately all the docs seem to be written for statisticians.
I do this in excel a lot, it's really handy for analysis of operational activities.
Here are the fields on each row used to make Bollinger bands (Value could be # of calls, complaint ratio, anything):
TimeStamp | Value | Moving Average | Moving STDEVP | Lower Control | Upper Control
Briefly, the moving average and the moving StdevP at a given row are computed over the prior 8 or so values in the series. The lower control at a given point in time = moving average - 2*moving StdevP, and the upper control = moving average + 2*moving StdevP.
This can easily be done in Excel for a single file, but if I can find a way to make it work, R will be better for my needs - hopefully faster and more reliable when automated, too.
Links or tips would be appreciated.

You could use the function rollapply() from the zoo package, provided you work with a zoo series:
library(zoo)

TimeSeries <- cumsum(rnorm(1000))
ZooSeries <- as.zoo(TimeSeries)
BollLines <- rollapply(ZooSeries, 9, function(x){
  M <- mean(x)
  SD <- sd(x)
  c(M, M + SD*2, M - SD*2)
})
Now you have to remember that, by default, rollapply uses a centered frame, meaning that it takes values to the left and the right of the current day. This is also more convenient and more true to the definition of the Bollinger band than your suggestion of taking the x prior values.
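That said, if you do prefer the trailing window from your Excel setup (prior values only), rollapply takes an align argument; a minimal sketch:
# Same computation, but each window ends at the current observation
# (align = "right"), i.e. only the current and prior values are used
BollTrail <- rollapply(ZooSeries, 9, function(x){
  M <- mean(x)
  SD <- sd(x)
  c(M, M + SD*2, M - SD*2)
}, align = "right")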
If you don't want to convert to zoo, you can work with plain vectors and write your own function. I added an S3-based plotting function that lets you plot the calculations easily. With these functions, you could do something like:
TimeSeries <- cumsum(rnorm(1000))
X <- BollingerBands(TimeSeries,80)
plot(X,TimeSeries,type="l",main="An Example")
to get a plot of the random-walk series with the three Bollinger lines overlaid. The function code:
BollingerBands <- function(x, width){
  Start <- width + 1
  Stop <- length(x)
  # NA padding so the output lines up with the original series
  Trail <- rep(NA, ceiling(width/2))
  Tail <- rep(NA, floor(width/2))
  # For every window, compute the mean and the mean +/- 2 standard deviations
  Lines <- sapply(Start:Stop, function(i){
    M <- mean(x[(i-width):i])
    SD <- sd(x[(i-width):i])
    c(M, M + 2*SD, M - 2*SD)
  })
  Lines <- apply(Lines, 1, function(i) c(Trail, i, Tail))
  Out <- data.frame(Lines)
  names(Out) <- c("Mean", "Upper", "Lower")
  class(Out) <- c("BollingerBands", class(Out))
  Out
}

plot.BollingerBands <- function(x, data, lcol=c("red","blue","blue"), ...){
  # Plot the raw series, then draw the mean and band lines over it
  plot(data, ...)
  for(i in 1:3){
    lines(x[, i], col=lcol[i])
  }
}

There is an illustration in the R Graph Gallery (65) giving code both for calculating the bands and for plotting share prices.
The 2005 code still seems to work six years later: it fetches IBM's share price, going back several months from the current date. The most obvious bug is the width of the bandwidth and volume charts at the bottom, which have been narrowed; there may be another bug in the number of days covered.

Related

Interpolating blinks in eyetracking data - start/end intervals as time points

So, I apologise in advance for my poor attempt at explaining myself. I am rather lost.
Summary:
I am working with the eyelinker package in R to analyse pupil size data in a time-series fashion.
I have managed to create a set of intervals where blinks start and end (extendedBlinks; they extend 150 milliseconds in each direction, at 1000 Hz).
# Intervals() and expand() come from the intervals package;
# %>% is the magrittr/dplyr pipe
library(intervals)

# Define set of intervals for blinks
Blk <- cbind(df$blinks$stime, df$blinks$etime)
# Extend blinks (150 milliseconds each way)
extendedBlinks <- Intervals(Blk) %>% expand(150, "absolute")
head(extendedBlinks)
output:
Object of class Intervals
6 intervals over R:
[4485724, 4486141]
[4485984, 4486657]
[4486549, 4486853]
[4486595, 4487040]
[4486800, 4489142]
[4498990, 4499339]
In my dataframe, I have PSL (Pupil Size Left), PSR (Pupil Size Right), and time (relative to the eyetracker; it is on the same scale as the intervals shown above).
So, I want to get the PSL/PSR values over those intervals (for the sake of the example, let's just stick to getting the PSL).
I've tried many things, but nothing seems to work for me. I want to replace the hard-coded values in y1 below with the pupil sizes at times extendedBlinks[1,1] and extendedBlinks[1,2] respectively (and then iterate over the intervals to interpolate the blinks).
# Interpolation
x1 <- c(extendedBlinks[1,1], extendedBlinks[1,2])
y1 <- c(500, 550)   # placeholder pupil sizes at the interval endpoints
interp <- approx(x1, y1, n = extendedBlinks[1,2] - extendedBlinks[1,1])
plot(interp)
Again, sorry for the poorly worded question. I'll edit as I receive feedback to try and make it clearer.
Any ideas?
Cheers!
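One possible shape for the iteration step (a sketch, untested; time and PSL here are hypothetical stand-ins for the time stamp and left-pupil columns of the samples data frame):
# For each extended blink, overwrite the samples inside the interval with a
# linear interpolation anchored on the last clean sample before the blink
# and the first clean sample after it
for (i in seq_len(nrow(extendedBlinks))) {
  t0 <- extendedBlinks[i, 1]
  t1 <- extendedBlinks[i, 2]
  before <- tail(which(time < t0), 1)  # last clean sample before the blink
  after  <- head(which(time > t1), 1)  # first clean sample after the blink
  inside <- time >= t0 & time <= t1
  PSL[inside] <- approx(x = time[c(before, after)],
                        y = PSL[c(before, after)],
                        xout = time[inside])$y
}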

Normalizing an R stars object by grid area?

first post :)
I've been transitioning my R code from sp to sf/stars, and one thing I'm still trying to grasp is how to account for the area of the cells in my grids.
Here's an example code to explain what I mean.
library(stars)
library(tidyverse)
# Reading in an example tif file, from stars() vignette
tif = system.file("tif/L7_ETMs.tif", package = "stars")
x = read_stars(tif)
x
# Get areas for each grid of the x object. Returns stars object with "area" in units of [m^2]
x_area <- st_area(x)
x_area
I tried loosely adapting code from this vignette (https://github.com/r-spatial/stars/blob/master/vignettes/stars5.Rmd) to divide each value in x by its grid area, and it's not working as expected (perhaps because my objects are stars and not sf?).
x$test1 = x$L7_ETMs.tif / x_area # Some computationally intensive calculation seems to happen, but doesn't produce the results I expect?
x$test1 = x$L7_ETMs.tif / x_area$area # Throws error, "non-conformable arrays"
What does seem to work is the following.
x %>%
  mutate(test1 = L7_ETMs.tif / units::set_units(as.numeric(x_area$area), m^2))
Here are the concerns I have with this code.
I worry that as I turn x_area$area (a matrix of areas in lat/lon) into a numeric vector, I may mess up the matching between each grid cell and its area. I did some rough testing to see if the areas match up the way I expect them to, but I can't escape the worry that this could lead to errors that are difficult to catch.
It just doesn't seem clean that I start with x_area in the correct units, only to remove and then set the units again during the computation.
Can someone suggest a "cleaner" implementation for what I'm trying to do, i.e. multiplying or dividing grids by their areas while maintaining units throughout? Or convince me that the code I have is fine?
Thanks!
I do not know how to improve the stars code, but you can compare the results you get with this
tif <- system.file("tif/L7_ETMs.tif", package = "stars")
library(terra)
r <- rast(tif)
a <- cellSize(r, sum=FALSE)   # a raster holding the area of each cell
x <- r / a
With planar data you could do the following instead, when it is safe to assume there is no distortion (generally that is not the case, but it can be):
y <- r / prod(res(r))
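For what it's worth, a rough cross-check of the two approaches (a sketch; x_norm and r_norm are names I made up, and x, x_area, r and a refer to the objects above, assuming x is still the stars object from the question):
r_norm <- r / a
x_norm <- x %>%
  mutate(test1 = L7_ETMs.tif / units::set_units(as.numeric(x_area$area), m^2))
# summary() ignores cell order, so if the two computations agree these
# two summaries should be essentially identical
summary(as.numeric(x_norm$test1))
summary(as.vector(values(r_norm)))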

How would I use FFT to analyse an audio wave in R, Rstudio

I am trying to use R to find the harmonics within a sound file, and I would also like to plot the findings as a Frequency (Hz) (x) vs. Strength (y) graph to show the harmonics found. So far I've found it hard to find a helpful, working example of FFT being used on an audio file in R, as most of the tutorials work with a premade cosine or sine wave.
I have found an example on the https://www.rdocumentation.org/packages/stats/versions/3.6.2/topics/fft page under community code, but it did not work very well when I attempted to use it on the audio file.
# my addition (I've left the wave file path empty deliberately)
library(tuneR)
voice <- readWave("", from=0, to=Inf, units=c("seconds"), header=FALSE, toWaveMC=NULL)

# the community code
x <- voice@left
fs <- voice@samp.rate
nbits <- voice@bit
x <- x[1:(fs*5)]                        # keep the first 5 seconds of samples
y <- fft(x)
y.tmp <- Mod(y)                         # magnitude of the complex spectrum
y.ampspec <- y.tmp[1:(length(y)/2+1)]   # keep the non-negative frequencies
y.ampspec[2:(length(y)/2)] <- y.ampspec[2:(length(y)/2)] * 2  # fold in the negative half
f <- seq(from=0, to=fs/2, length=length(y)/2+1)
plot(f, y.ampspec, type="h", xlab="Frequency (Hz)", ylab="Amplitude Spectrum", xlim=c(0, 350))
Please send some help!
If you're OK with using the seewave package, it has some helpful functions, including meanspec, which works out the mean frequency spectrum. Here's an example.
library(tuneR)
library(seewave)
data('sheep')
ms <- meanspec(sheep)
The ms object is a two dimensional array where the first column is the frequency and the second is the amplitude.
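Since the goal is finding harmonics, seewave's fpeaks() can then pick the dominant peaks out of that spectrum (a sketch; nmax is just the number of peaks to keep):
# Detect the 10 strongest peaks in the mean spectrum; fpeaks() returns a
# two-column matrix (frequency in kHz, amplitude) and plots them by default
peaks <- fpeaks(ms, nmax = 10)
peaks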

Smoothing directional (angular) data in R

I'm trying to deal with tracking errors from some motion analysis software, after the data is exported. For some frames the direction is rotated by 180 degrees from the "true" direction.
I would like to smooth the data set so that when the direction changes by ~180 degrees in a single frame, it is transformed to reflect the actual angle.
Is anyone aware of a way to solve this using any of the circular statistics packages in R, such as CircStats? Alternatively, I could imagine a script that checks whether the frame-to-frame variation is near 180 degrees, subtracts 180 if so, and then moves to the next frame. Does this sound like a reasonable approach, and would it be easy to implement in R?
I'm afraid I don't have the rep to upload a figure describing the problem (it's very easy to see), but here is an example dataset.
Thanks for the help. I've been a longtime user of Stack Overflow, but I have never failed to find my answer before needing to ask.
David
edit - attached image
It was an interesting problem to solve! It needs to be iterative since whenever a value is changed, it can solve a problem but create another... Let me know if it does the trick.
threshold <- 90      # flag frame-to-frame jumps larger than this (degrees)
correction <- 180    # size of the correction to apply (degrees)

dat <- read.table("angle_data.txt", header=TRUE)
dat <- ts(dat)

# Iterate until no problematic jumps remain: correcting one observation
# can create a new jump at the following one, so a single pass is not enough
repeat {
  diffs <- dat - lag(dat, k = 1)               # frame-to-frame changes
  probl <- which(abs(diffs[,2]) > threshold)   # direction is in column 2
  if(length(probl) == 0)
    break
  obs.1 <- dat[probl[1], 2]
  obs.2 <- dat[probl[1] + 1, 2]
  # shift the offending observation by 180 degrees towards its predecessor
  dat[probl[1] + 1, 2] <- obs.2 + sign(obs.1 - obs.2) * correction
}
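A quick visual check of the result (a sketch; the direction is assumed to be in column 2, as above):
# After the loop, jumps of ~180 degrees between frames should be gone
plot(dat[, 2], type = "l", xlab = "frame", ylab = "direction (degrees)")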

copy rasters within stack

I would like to compare soil moisture rasters, available every 3 days, to rainfall rasters, available daily. I make a stack of each and resample to the appropriate resolution. Now, to compare the stacks easily, it would be nice to be able to copy each layer in the soil moisture stack and insert it next to itself twice. This is basically the same question as Stacking an existing RasterStack multiple times,
except that I need the big stack to be sorted so that all of the rasters are in time order. Is there a way to do this?
(I know that I could copy the files before stacking them the first time, but this would require resampling three times as many layers. Since resampling is the slowest part of my script, there should be a better way.)
Something like this?
library(raster)

# example data: a 6-layer brick standing in for the soil moisture stack
r <- raster(ncol=10, nrow=10)
r[] <- 1:ncell(r)
x <- brick(r, r, r, r, r, r)
x <- x * 1:6

# build a list with every layer repeated three times,
# in the original (time) order, then stack it
y <- list()
for (i in 1:nlayers(x)) {
  r <- raster(x, i)
  y <- c(y, r, r, r)
}
s <- stack(y)
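A quick check that the big stack keeps the time order (a sketch):
nlayers(s)  # 18: each of the 6 original layers repeated 3 times
names(s)    # the repeated names appear in consecutive triplets, i.e. in time order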
