So, I apologise in advance for my poor attempt at explaining myself. I am rather lost.
Summary:
I am working with the eyelinker package in R to analyse pupil size data in a time-series fashion.
I have managed to create a set of intervals where blinks start and end (extendedBlinks); they extend 150 milliseconds in each direction (the sampling rate is 1000 Hz).
library(intervals)  # Intervals(), expand()
library(magrittr)   # %>%

# Define set of intervals for blinks
Blk <- cbind(df$blinks$stime, df$blinks$etime)
# Extend blinks (150 milliseconds each way)
extendedBlinks <- Intervals(Blk) %>% expand(150, "absolute")
head(extendedBlinks)
output:
Object of class Intervals
6 intervals over R:
[4485724, 4486141]
[4485984, 4486657]
[4486549, 4486853]
[4486595, 4487040]
[4486800, 4489142]
[4498990, 4499339]
In my data frame, I have PSL (Pupil Size Left), PSR (Pupil Size Right), and time (relative to the eye tracker, with the same values as the intervals shown above).
So, I want to interpolate the PSL/PSR across those intervals (for the sake of the example, let's just stick to the PSL). I've tried many things, but nothing seems to work for me. I want to replace the placeholder values in y1 below with the PSL values at times extendedBlinks[1,1] and extendedBlinks[1,2] respectively, and then iterate over the intervals to interpolate across the blinks.
# Interpolation
x1 <- c(extendedBlinks[1,1],extendedBlinks[1,2])
y1 <- c(500, 550)
interp <- approx(x1,y1, n = extendedBlinks[1,2]-extendedBlinks[1,1])
plot(interp)
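For context, here is a rough sketch of the kind of loop I have in mind, assuming the raw samples sit in a data frame raw with columns time and PSL (those names are just placeholders for my actual eyelinker output), in case that helps frame the question:
# Sketch only: overwrite the PSL samples inside each extended blink interval
# with a straight line between the values at the interval boundaries.
for (i in seq_len(nrow(extendedBlinks))) {
  t0 <- extendedBlinks[i, 1]
  t1 <- extendedBlinks[i, 2]
  inside <- raw$time > t0 & raw$time < t1               # samples to replace
  if (!any(inside)) next
  y0 <- raw$PSL[which.min(abs(raw$time - t0))]          # PSL at interval start
  y1 <- raw$PSL[which.min(abs(raw$time - t1))]          # PSL at interval end
  raw$PSL[inside] <- approx(x = c(t0, t1), y = c(y0, y1),
                            xout = raw$time[inside])$y
}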
Again, sorry for the poorly worded question. I'll edit as I receive feedback to try and make it clearer.
Any ideas?
Cheers!
Related
I have about 50 datasets that include all trades within a timeframe of 30 days for about 10 pairs on 5 exchanges. All pairs are of the same asset class, meaning they are strongly correlated and expected to have similar properties, but are on different scales. An example of this data would be
set.seed(1)
n <- 1000
dates <- seq(as.POSIXct("2019-08-05 00:00:00", tz="UTC"), as.POSIXct("2019-08-05 23:59:00", tz="UTC"), by="1 min")
x <- data.frame("t" = sort(sample(dates, 1000)),"p" = cumsum(sample(c(-1, 1), n, TRUE)))
Roughly, I need to identify the relevant local minima and maxima, which happen daily. The yellow marks are my points of interest. Unlike this example, there is usually only one such point per day and I consider each day separately. However, it is hard to filter out noise from my actual points of interest.
My actual goal is to find the exact point, at which the pair started to make a jump and the exact point, at which the jump is over. This needs to be as accurate as possible, as I want to observe which asset moved first and which asset followed at which point in time (as said, they are highly correlated).
Between two extreme values, I want to minimize the distance and maximize the relative/absolute change, as my points of interest are usually close to each other and their difference is quite large.
I already looked at other questions like
Finding local maxima and minima and Algorithm to locate local maxima, and also this algorithm that has the same goal. However, my dataset is extremely noisy. I already reduced the dataset to 5-minute intervals; however, this led to the relevant points being omitted by the functions that identify local minima and maxima. Therefore, this was not a good solution given my goal.
How can I achieve my goal with a reasonably accurate algorithm? Manually skimming through all the time series is not an option, since this would require me to evaluate 50 * 30 time series by hand, which is too time-consuming. I'm really puzzled and have been trying to find a suitable solution for a week.
If more code snippets are wanted, I'm happy to share them; however, they didn't give me meaningful results, which would run counter to the idea of providing a minimal working example, so I decided to leave them out for now.
EDIT:
First off, I updated the plot and added timestamps to the dataset to give you an idea of the actual resolution. Ideally, the algorithm would detect both jumps on the left: the inner two dots because they're closer together and jump without interruption, and the outer dots because they're more extreme in value. In fact, this perhaps answers the question of whether the algorithm is allowed to look into the future. Yes: if there's another local extremum within a range of, say, 30 observations (or 30 minutes), then ignore the intermediate local extrema.
In my data, jumps have ranged from 2% to ~15%, so a jump needs to be at least 2% to be considered, and only if a threshold of 15 (this might be adaptable) consecutive steps in the same direction before/after the peaks and valleys is reached; a rough helper for that criterion is sketched below.
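As an illustration of that criterion only (the helper name and the exact window handling are my own assumptions, not tested against the real data):
# Sketch: TRUE if the `len` steps starting at index i all move in direction
# `dir` (+1 for up, -1 for down); the threshold of 15 comes from the text above.
run_in_direction <- function(p, i, len = 15, dir = 1) {
  steps <- diff(p[i:min(length(p), i + len)])
  length(steps) == len && all(sign(steps) == dir)
}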
A very naive approach was to subset the data around the global minimum and maximum of a day. In most cases, this denoised the data and worked as an indicator. However, this is not robust when the global extrema are not in the range of the jump.
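For reference, a minimal version of that naive approach on the example data above (the 30-observation margin is an arbitrary choice on my part):
# Split by day, then keep only the observations around each day's global
# minimum and maximum, plus a margin. Purely illustrative.
x$day <- as.Date(x$t, tz = "UTC")
naive <- do.call(rbind, lapply(split(x, x$day), function(d) {
  lo <- which.min(d$p)                       # index of the day's global minimum
  hi <- which.max(d$p)                       # index of the day's global maximum
  from <- max(1, min(lo, hi) - 30)
  to   <- min(nrow(d), max(lo, hi) + 30)
  d[from:to, ]                               # window around the two extremes
}))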
Hope this clarifies why this isn't a statistical question (there are some tests to determine whether a jump has happened, but not for jump arrival time afaik).
In case anyone wants a real example:
this is a corresponding graph, this is the raw data of the relevant period and this is the reduced dataset.
Perhaps as a starting point, look at the function streaks in package PMwR (which I maintain). A streak is defined as a move of a specified size that is uninterrupted by a countermove of the same size. The function works with returns, not differences, so I add 100 to your data.
For instance:
library("PMwR")

set.seed(1)
n <- 1000
x <- 100 + cumsum(sample(c(-1, 1), n, TRUE))
plot(x, type = "l")
s <- streaks(x, up = 0.12, down = -0.12)
abline(v = s[, 1])
abline(v = s[, 2])
The vertical lines show the starts and ends of streaks.
Perhaps you can then filter the identified streaks by required criteria such as length (a small sketch for that follows after the next code block). Or you may play around with different thresholds for up and down moves (this is not really recommended with the current implementation, but perhaps the results are good enough). For instance, up streaks might look as follows; a green vertical line shows the start of a streak, a red line its end.
plot(x, type = "l")
s <- streaks(x, up = 0.12, down = -0.05)
s <- s[!is.na(s$state) & s$state == "up", ]
abline(v = s[, 1], col = "green")
abline(v = s[, 2], col = "red")
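If you then want to filter on streak length as well, something along these lines should work (a sketch; I'm indexing the start and end columns by position, as above):
s_long <- s[(s[, 2] - s[, 1]) >= 30, ]   # keep streaks spanning at least 30 observations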
I'm trying to find the peak frequencies hidden in my data using the fft() method in R. While preparing the data, a more experienced user recommended creating a "mask" (more on that after explaining the details), which does give me the exact diagram I'm looking for. The problem is, I don't understand what it does or why it's needed.
To give some context, I'm working with .txt files with around 12000 entries each. It's voltage vs. time information, and the expected result is just a sinusoidal wave with a clear peak frequency that should be close to 1-2 Hz. This is an example of what one of those files looks like:
I've been trying to use the Fast Fourier Transform method fft() implemented in R to find the peak frequencies and get a diagram that reflected them clearly. At first, I calculate some things that I understand are going to be useful, like the Nyquist frequency and the range of frequencies I'll show in the final graph:
n = length(variable)
dt = time[5]-time[4]
df = 1/(max(time)) #Find out the "unit" frequency
fnyquist = 1/(2*dt) #The Nyquist frequency
f = seq(-fnyquist, fnyquist-df, by=df) #These are the frequencies I'll plot
But when I plot the absolute value of what fft(data) calculates vs. the range of frequencies, I get this:
The peak frequency seems to be close to 50 Hz, but I know that's not the case; it should be close to 1 Hz. I'm a complete newbie in R and in Fourier analysis, so after researching a little, I found on a Swiss page that this can be solved by creating a "mask", which is actually just a vector with a repeating pattern (1, -1, 1, -1...) of the same length as my data vector:
mask=rep(c(1, -1),length.out=n)
Then if I multiply my data vector by this mask and plot the results:
results = mask*data
plot(f,abs(fft(results)),type="h")
I get what I was looking for. (This is the graph after limiting the x-axis to a reasonable scale).
So, what's the mask actually doing? I understand it's flipping the sign of every other data point, but I don't get why it would take the inferred peak frequency from ~50 Hz to the correct result of ~1 Hz.
Thanks in advance!
Your "mask" is one of two methods of performing an fftshift, which is commonly done to center the 0 Hz output of an FFT in the middle of a graph or plot (instead of at the left edge, with the negative frequencies wrapping around to the right edge).
To perform an fftshift, you can heterodyne or modulate your data (by Fs/2) before the FFT, or simply do a circular shift by 50% after the FFT. Both produce the same result; they are the same due to the shift property of the DFT.
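A small numerical check of that equivalence (a sketch assuming an even-length signal; the toy signal and its length are arbitrary):
set.seed(1)
n <- 256
x <- sin(2 * pi * 3 * seq(0, 1, length.out = n))        # toy signal

mask <- rep(c(1, -1), length.out = n)
spec_mask  <- abs(fft(mask * x))                        # modulate by Fs/2, then FFT
spec_shift <- abs(fft(x))[c((n/2 + 1):n, 1:(n/2))]      # FFT, then circular shift by n/2

all.equal(spec_mask, spec_shift)                        # TRUE (up to rounding)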
I'm working with some data that has a few major outliers, mostly due to the technology used to capture the data. I removed these to normalize the data; however, for the nature of the work, I've been asked to visualize every participant's results in a series of graphs in order to compare performances. I'm a little new to R, so while the normalization wasn't difficult, I'm a little stumped as to how I might go about re-introducing these outliers to the scale of the normalized data. Is there a way to scale outliers to previously normalized data (mean=0) without skewing the data?
EDIT: I realize I left a lot of info out (still new to asking questions here), so here's an example of what my process looks like right now:
#example data of 20 participants, 18 of which are normal-range and 2 of which
#are outliers in a data frame
time <- rnorm (18, mean = 30, sd = 10)
distance <- rnorm(18, mean = 100, sd = 20)
time <- c(time, 2, 100)
distance <- c(distance, 30, 1000)
df <- data.frame(time, distance)
The outliers were mostly known due to the nature of the data collection, so I removed them:
dfClean <- df[-c(19, 20),]
I then plotted the data to check for normality (step skipped here because the data were generated to be normal).
From there, I normalized the columns in the data set so that each variable would have a mean of 0 and an SD of 1, so they could be plotted together. The goal is to use this as a "normal" range to be able to visualize spread and outliers in future data (accent on visualization).
#using package clusterSim
dfNorm <- data.Normalization(dfClean, type="n13", normalization = "column")
The problem is, I'm not sure how to scale outliers to this range afterwards...or if I'm even understanding the scale function correctly. So, how do I plot all the subjects in the original df, including outliers, on a normalized mean=0 scale?
I am not sure whether it's acceptable to answer with only an external link on Stack Overflow. Still, you can refer to the link below to resolve your problem:
https://www.r-bloggers.com/identify-describe-plot-and-remove-the-outliers-from-the-dataset/
I have used this many times and found it useful.
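If the aim is simply to show the outliers on the same mean-0/SD-1 scale as the cleaned data, one possible sketch (assuming a plain per-column (x - mean)/SD standardization, done here with base R's scale() rather than clusterSim) would be:
# Standardize every row of the original df (outliers included) using the
# centers and spreads estimated from the cleaned data only, so the outliers
# do not influence the scale they are plotted on.
centers <- colMeans(dfClean)
spreads <- apply(dfClean, 2, sd)
dfAllScaled <- as.data.frame(scale(df, center = centers, scale = spreads))
plot(dfAllScaled$time, dfAllScaled$distance)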
I'm new to R and I am attempting to take a set of time series and run them through a Conditional Inference Tree to help classify the shape of the time series. The problem is that not all of the time series have the same number of periods. I am trying to expand each time series to be 30 periods long, but still maintain the same "shape". This is as far as I have got:
library(zoo)
test<-c(606,518,519,541,624,728,560,512,777,728,1014,1100,930,798,648,589,680,635,607,544,566)
accordion <- function(A, N){
  x  <- ts(scale(A), start = c(1, 1), frequency = 1)
  X1 <- zoo(x, seq(from = 1, to = N, by = (N - 1)/(length(x) - 1)))
  X2 <- merge(X1, zoo(order.by = seq(start(X1), end(X1) - 1,
                                     by = ((N - 1)/length(x))/(N/length(x)))))
  X3 <- na.approx(X2)
  return(X3)
}
expand.test<-accordion(test,30)
plot(expand.test); lines(scale(test))
length(expand.test)
The above code scales the time series, spaces it out evenly over 30 periods, and interpolates the missing values. However, the returned series is 42 units long, not 30, although it retains the same "shape" as the original time series. Does anyone know how to modify this so that the result produced by the function accordion is 30 periods long and the shape of the time series remains relatively unchanged?
I think there's a base R solution here. Check out approx(), which does linear (or constant) interpolation with as many points n as you specify. Here I think you want n = 30.
test2 <- approx(test, n=30)
plot(test2)
points(test, pch="*")
This returns a list test2 whose second element, y, contains your interpolated values. I haven't yet used your time series object, but it seems that was entirely internal to your function, correct?
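For completeness, pulling the interpolated values back out (a sketch):
length(test2$y)          # 30 interpolated values
expanded <- ts(test2$y)  # wrap back into a ts object if needed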
I'm learning the R language. I know how to do a moving average, but I need to do more, and I am not a statistician; unfortunately, all the docs seem to be written for statisticians.
I do this in Excel a lot; it's really handy for analysing operational activities.
Here are the fields on each row needed to make Bollinger bands:
TimeStamp | Value | Moving Average | Moving STDEVP | Lower Control | Upper Control
Value could be the number of calls, a complaint ratio, anything.
Briefly, the moving average and the moving STDEVP are computed over the prior 8 or so values in the series. The lower control at a given point in time is the moving average - 2 * moving STDEVP, and the upper control is the moving average + 2 * moving STDEVP; a rough sketch of what I mean follows below.
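In R-ish pseudocode, what I'm after is roughly this (column names are from the table above; I'm aware sd() is the sample SD whereas Excel's STDEVP is the population SD, so this is only approximate):
width <- 8
Value <- cumsum(rnorm(100))                  # stand-in for the Value column

MovingAverage <- sapply(seq_along(Value), function(i)
  if (i < width) NA else mean(Value[(i - width + 1):i]))
MovingSD <- sapply(seq_along(Value), function(i)
  if (i < width) NA else sd(Value[(i - width + 1):i]))

LowerControl <- MovingAverage - 2 * MovingSD
UpperControl <- MovingAverage + 2 * MovingSD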
This can easily be done in Excel for a single file, but if I can find a way to do it in R, R will be better for my needs; hopefully faster and more reliable when automated, too.
Links or tips would be appreciated.
You could use the function rollapply() from the zoo package, provided you work with a zoo series:
library(zoo)

TimeSeries <- cumsum(rnorm(1000))
ZooSeries <- as.zoo(TimeSeries)
BollLines <- rollapply(ZooSeries, 9, function(x){
  M <- mean(x)
  SD <- sd(x)
  c(M, M + SD*2, M - SD*2)
})
Now you have to remember that rollapply uses a centered frame, meaning that it takes the values to the left and the right of the current day. This is also more convenient and more true to the definition of the Bollinger Band than your suggestion of taking x prior values.
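If you do want the trailing window from the question instead, rollapply() can be right-aligned (a sketch with the same window of 9):
# Same calculation, but each window ends at the current observation
BollTrailing <- rollapply(ZooSeries, 9, function(x){
  M <- mean(x)
  SD <- sd(x)
  c(M, M + SD*2, M - SD*2)
}, align = "right", fill = NA)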
If you don't want to convert to zoo, you can use plain vectors as well and write your own function. I added an S3-based plotting function that allows you to easily plot the calculations too. With these functions, you could do something like:
TimeSeries <- cumsum(rnorm(1000))
X <- BollingerBands(TimeSeries,80)
plot(X,TimeSeries,type="l",main="An Example")
to get :
The function code:
BollingerBands <- function(x, width){
  Start <- width + 1
  Stop  <- length(x)
  # NA padding so the output lines up with the input series
  Trail <- rep(NA, ceiling(width/2))
  Tail  <- rep(NA, floor(width/2))
  # mean and +/- 2 SD over the window ending at each position
  Lines <- sapply(Start:Stop, function(i){
    M  <- mean(x[(i-width):i])
    SD <- sd(x[(i-width):i])
    c(M, M + 2*SD, M - 2*SD)
  })
  Lines <- apply(Lines, 1, function(i) c(Trail, i, Tail))
  Out <- data.frame(Lines)
  names(Out) <- c("Mean", "Upper", "Lower")
  class(Out) <- c("BollingerBands", class(Out))
  Out
}
plot.BollingerBands <- function(x, data, lcol = c("red","blue","blue"), ...){
  plot(data, ...)
  for(i in 1:3){
    lines(x[, i], col = lcol[i])
  }
}
There is an illustration in the R Graph Gallery (65) giving code both for calculating the bands and for plotting share prices.
The 2005 code still seems to work six years later and will plot IBM's current share price going back several months.
The most obvious bug is the width of the lower bandwidth and volume charts, which have been narrowed; there may be another bug in the number of days covered.