I have some data that I need to analyse.
The data consists of a series of floating-point numbers representing durations in milliseconds.
From each duration I need to calculate the frequency of the event (occurrences per second), so I simply calculate it like
occurrence per second = (1000 / time in milliseconds)
Now I need to find the average occurrence of that event per second,
but I am not sure which order of operations is accurate.
Should I average the durations first and then calculate the average occurrence as
average occurrence = (1000 / average time)
or should I calculate the frequency for each duration and average the results?
The results differ a bit between the two cases, so I am not sure which one is the correct approach.
Example:
Say we are measuring the frame rate of a device.
Each frame takes x milliseconds to draw.
From that we can say
frames per second = (1000/x)
Now if my data has 1000 durations,
either I can average them to get the average duration of a frame and compute frames per second = (1000/average duration),
or
I can calculate 1000 frames-per-second values first,
frames per second = (1000/duration)
and average those 1000 fps values.
Which one is correct?
Any suggestions?
You should choose the first method: calculate the average frame duration and then let avg_fps = 1000 / avg_duration_in_milliseconds, or, probably easier, avg_fps = number_of_frames / total_duration_in_seconds. Both give the same result.
Example:
Say you have 3 frames in one second, with durations of 200 ms, 300 ms and 500 ms. Since you have 3 frames in 1 second, the avg_fps is 3.
The average duration is 333.33ms which gives the right result (1000/333.33 = 3). But if you calculate the individual fps of each frame you get (1000/200 = 5), (1000/300 = 3.33) and (1000/500 = 2). The average of 5, 3.33 and 2 is 3.44 - the highest values will skew the result in the wrong direction. So choose the first method instead.
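To see the difference concretely, here is a small R sketch (R is used purely for illustration) that reproduces the three-frame example above. Method 1 is effectively the harmonic mean of the per-frame fps values, which is why it matches the true frame count:

# Three frame durations (ms) that together span exactly one second
durations <- c(200, 300, 500)

# Method 1: average the durations, then convert to fps
1000 / mean(durations)      # 3

# Method 2: convert each duration to fps, then average
mean(1000 / durations)      # ~3.44, pulled upward by the short frames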
Related
I’m wondering which DolphinDB function can calculate the time difference. For example, I have two values of DATETIME type, A and B. How can I calculate the time difference between A and B in years? The output should be like 0.5 years.
Subtract variable B from A to get the difference in seconds. Then divide the result by 365 * 86400 (86400 is the number of seconds in a day, so 365 * 86400 is the number of seconds in a 365-day year) to obtain the difference in years. For example:
(2022.10.18T12:00:00 - 2022.04.18T00:00:00) \ (365 * 86400 )
The result is 0.50274, meaning the difference between the two dates is 0.50274 years.
I am trying to extract the max values of CO2ppm (column E) that were logged every second over 1 hour (column D), for a total of 60 minutes (rows ~3300). I have column D for time (in HH:MM:SS) and CO2ppm (numeric). I have 50 CO2 samples that I collected, each corresponding to a minute (e.g. I collected sample #1 at minute 20:54, in F2), but the data is logged every second within that minute, and I want the highest CO2 ppm in that minute.
The formula =MAX(IF(D:D=A2,E:E)) works to return the max CO2ppm value when I use the date (A2) as the target value for the entire day of sampling, but it does not work when I try to match my target minute (F2, 20:54) against column D (HH:MM:SS). I tried converting that column to text using =TEXT(D:D,"H:M") so that the target value would match the minute values, excluding the seconds, but with no luck.
How can I match my minute (F2) with the range of rows that have that minute (20:54:xx, column D) to find the max value in column E?
Example data:
Thank you!
An easy way to do this would be to add a helper column with the timestamp stripped of the second component.
However, in case that is not an option, you could use a formula like the following, which strips the seconds out of the timestamps in column D:
=MAX(IF((D2:D5-SECOND(D2:D5)/86400)=F2,E2:E5))
Depending on your version of Excel, you may have to confirm the formula with Ctrl+Shift+Enter.
I need to create a plot of the body temperatures of mice. I have data points collected every 15 minutes over the course of seven days. I also have the calculated mean temperature at each timepoint for the plot. The next step is calculating the standard error of each of these mean temperatures, taking into account all seven days' worth of temperature readings. This is an image of the extended data I am working from:
https://imgur.com/ukk0iOt
I also have a separate, condensed data frame that is the mean_temp from above averaged over seven days for every timepoint, so only one temperature reading for each of the 24 hours' worth of timepoints. It is 96 rows and only contains columns for time and mean_temp24.
With the following code, I am only able to calculate a single standard error for all the timepoints (I know it's wrong, but I am having a heck of a time finding a solution). I am also unable to calculate the standard error from the condensed 24-hour dataset, since the full seven days' worth of temperatures are not present there.
# Adding a column with the mean temperatures (7 days) of three mice to data frame 'df'
df = cbind(df, "mean_temp" = rowMeans(df[,3:5], na.rm=TRUE))

# Trying to calculate the standard deviation for each timepoint, to start with
times = unique(df$time)

# Loop meant to produce an individual standard error per row
for (current_time in times){
  df$se = sd(df$mean_temp24, na.rm=T)/sqrt(3-1)
}
Ideally, I will end up with a data frame that is 96 lines (each a 15-minute interval timepoint) for 24 hours of temperature data, where the values are the means of the seven temperatures for each timepoint ("mean_temp" from the image of my data frame). I will also have an additional column for standard error, which takes into account the 7 temperature values used to calculate the mean temperature in the final, 24-hour dataset.
The actual output is a single, identical SE for every timepoint in the full dataset that is not condensed to 24 hours.
Use ddply from the plyr package. The function f is called for every unique combination of dt and time:
library(plyr)

f = function(x) {
  # number of non-missing readings for each mouse (columns 3-5)
  n3 = length(which(!is.na(x[,3])))
  n4 = length(which(!is.na(x[,4])))
  n5 = length(which(!is.na(x[,5])))
  # per-mouse mean and standard error for this dt/time combination
  data.frame(
    mean3 = mean(x[,3], na.rm=TRUE),
    mean4 = mean(x[,4], na.rm=TRUE),
    mean5 = mean(x[,5], na.rm=TRUE),
    se3 = sd(x[,3], na.rm=TRUE)/sqrt(n3),
    se4 = sd(x[,4], na.rm=TRUE)/sqrt(n4),
    se5 = sd(x[,5], na.rm=TRUE)/sqrt(n5)
  )
}

ddply(df, .(dt,time), f)
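As an aside, if the goal is a single mean and standard error per timepoint computed from the (up to) seven daily mean_temp values, the same approach can be adapted. This is only a sketch and assumes df already contains the mean_temp column built in the question:

# one mean and one SE per timepoint, across the seven days of mean_temp values
g = function(x) {
  n = sum(!is.na(x$mean_temp))
  data.frame(
    mean_temp24 = mean(x$mean_temp, na.rm=TRUE),
    se = sd(x$mean_temp, na.rm=TRUE)/sqrt(n)
  )
}
ddply(df, .(time), g)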
I would like to scale and center some data. I know how to scale it with
(scale(data.test[,1],center=TRUE,scale=TRUE))
I have 365 observations (one year) and would like to scale and center my data over a 20-day lookback period.
For example, I would like to do this:
"Normalized for a 20-day lookback period" means that to scale my first value, 01/01/2014 (dd/mm/yy), I have to scale it using only the 20 days before it, so with the values from 11/12/13 to 31/12/13,
and for 02/01/14, scale it with the values from 12/12/13 to 01/01/14, etc.
Normalizing the data would be
= (the data - the mean of all the data) / the standard deviation of all the data (see my code above)
But since I want a 20-day lookback period, meaning I only look at the 20 most recent values, it would be
= (the data - the mean of the 20 previous values) / the standard deviation of the 20 previous values
I thought of making a loop, maybe? As I am very new to R, I don't know how to write a loop in R, or even whether there is a better way to do what I want...
I would appreciate any help with this.
You want a 20-day lookback:

lookback <- 20
data.scale <- c()   # create a vector for the scaled data
for (i in lookback:nrow(data)) {
  # values from the current day and the 19 preceding days
  # (note: ":" binds tighter than "-", so the parentheses around i-(lookback-1) are needed)
  window <- data[(i - (lookback - 1)):i, 1]
  mean <- mean(window, na.rm = TRUE)
  # sqrt((lookback-1)/lookback) turns the sample sd into the population sd
  sd <- sd(window, na.rm = TRUE) * sqrt((lookback - 1) / lookback)
  data.scale <- c(data.scale, (data[i, 1] - mean) / sd)
}

Row 20 is normalized with the data from day 1 to day 20, row 21 with the data from day 2 to day 21, and so on...
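As an aside (not part of the original answer), the same rolling window can be written without an explicit loop using rollapply from the zoo package. Note that this sketch uses the plain sample standard deviation rather than the population correction applied above:

library(zoo)

# 20-day rolling z-score of column 1; align = "right" makes each window
# end at the current day, matching the loop above
z <- rollapply(data[,1], width = lookback,
               FUN = function(w) (w[lookback] - mean(w, na.rm=TRUE)) / sd(w, na.rm=TRUE),
               align = "right", fill = NA)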
I have a data set (3.2 million rows) in R which consists of pairs of time (milliseconds) and volts. The sensor that gathers the data only runs during the day so the time is actually the milliseconds since start-up that day.
For example, if the sensor runs 12 hours per day, then the maximum possible time value for one day is 43,200,000 ms (12h * 60m * 60s * 1000ms).
The data is continually added to a single file, which means there are many overlapping time values:
X: [1,2,3,4,5,1,2,3,4,5,1,2,3,4,5...] // example if range was 1-5 for one day
Y: [voltage readings at each point in time...]
I would like to separate each "run" into its own data frame so that I can clearly see individual days. Currently, when I plot the entire data set, it is incredibly muddy because all of the days are shown in a single plot. Thanks for any help.
If your data.frame df has columns X and Y, you can use diff to find every time X goes down (meaning a new day, it sounds like):
df$Day = cumsum(c(1, diff(df$X) < 0))   # increments by 1 each time X drops, i.e. a new day starts
Day1 = df[df$Day==1,]                   # keep only the rows from the first day
plot(Day1$X, Day1$Y)
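If you want every day as its own data frame rather than filtering one day at a time, base R's split() can build them all at once from the Day column created above:

days = split(df, df$Day)        # list with one data frame per day
plot(days[[1]]$X, days[[1]]$Y)  # same plot as Day1 above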