Grouped bar plot in R

I am trying to create a graph of the months grouped by season, where every season gets a different color, with the legend below.
Seasons: December, January, February - Winter
March, April, May - Spring
June, July, August - Summer
September, October, November - Autumn
Each airport should also get a different texture.
Airports: EWR, JFK, LGA
My data is:
1 2 3 4 5 6 7 8 9 10 11 12
EWR 24 23 28 26 27 31 30 23 14 17 16 33
JFK 16 21 20 21 22 28 32 23 14 13 12 25
LGA 14 16 18 20 19 26 26 20 13 15 15 24
What I have made so far is below, but I want the months in the same season to share a color and each airport to have its own texture.
Thanks!

library(reshape2)
library(ggplot2)

df = data.frame(month = c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12),
                EWR = c(24, 23, 28, 26, 27, 31, 30, 23, 14, 17, 16, 33),
                JFK = c(16, 21, 20, 21, 22, 28, 32, 23, 14, 13, 12, 25),
                LGA = c(14, 16, 18, 20, 19, 26, 26, 20, 13, 15, 15, 24),
                season = c("Winter", "Winter", "Spring", "Spring",
                           "Spring", "Summer", "Summer", "Summer",
                           "Autumn", "Autumn", "Autumn", "Winter"))
df = melt(df, id.vars = c("month", "season"))
head(df)
month season variable value
1 1 Winter EWR 24
2 2 Winter EWR 23
3 3 Spring EWR 28
4 4 Spring EWR 26
5 5 Spring EWR 27
6 6 Summer EWR 31
ggplot(df, aes(x = as.factor(month), y = value, fill = season, col = season)) +
  geom_col(position = "dodge") +
  facet_grid(~ variable)
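The code above already maps fill to season; the remaining piece is a per-airport texture instead of faceting. One possible approach (an assumption on my part, not part of the original attempt) is the ggpattern package, which adds a pattern aesthetic to column geoms; a minimal sketch reusing the df built above:

# Sketch using ggpattern (assumed installed): fill = season gives the colours,
# pattern = variable (the airport) gives the texture, legends go below the plot.
library(ggpattern)

ggplot(df, aes(x = as.factor(month), y = value, fill = season, pattern = variable)) +
  geom_col_pattern(position = "dodge", colour = "black", pattern_fill = "black") +
  labs(x = "month", y = "value", pattern = "airport") +
  theme(legend.position = "bottom")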

Related

Calculate a Weighted Rolling Average by rows by group in r?

I have a dataframe games_h. This is just a snippet of the table, but it has many teams and is sorted by date, team, and game number. I am trying to create a weighted rolling average grouped by team, and I would like the most recent game to be weighted more heavily than the game two games ago. So the weighted value would be (Game_1 * 1 + Game_2 * 2) / 3, or equivalently weights that sum to 1 in the same ratio, weights = c(1 - .667, .667).
dput(games_h)
structure(list(GameId = c(16, 16, 37, 37, 57, 57), GameDate = structure(c(17905,
17905, 17916, 17916, 17926, 17926), class = "Date"), NeutralSite = c(0,
0, 0, 0, 0, 0), AwayTeam = c("Virginia Cavaliers", "Virginia Cavaliers",
"Florida State Seminoles", "Florida State Seminoles", "Syracuse Orange",
"Syracuse Orange"), HomeTeam = c("Boston College Eagles", "Boston College Eagles",
"Boston College Eagles", "Boston College Eagles", "Boston College Eagles",
"Boston College Eagles"), Team = c("Virginia Cavaliers", "Boston College Eagles",
"Florida State Seminoles", "Boston College Eagles", "Syracuse Orange",
"Boston College Eagles"), Home = c(0, 1, 0, 1, 0, 1), Score = c(83,
56, 82, 87, 77, 71), AST = c(17, 6, 12, 16, 11, 13), TOV = c(10,
8, 9, 13, 11, 11), STL = c(5, 4, 4, 6, 6, 5), BLK = c(6, 0, 4,
4, 1, 0), Rebounds = c(38, 18, 36, 33, 23, 23), ORB = c(7, 4,
16, 10, 7, 6), DRB = c(31, 14, 20, 23, 16, 17), FGA = c(55, 57,
67, 55, 52, 45), FGM = c(33, 22, 28, 27, 29, 21), X3FGM = c(8,
7, 8, 13, 11, 9), X3FGA = c(19, 25, 25, 21, 26, 22), FTA = c(14,
9, 24, 28, 15, 23), FTM = c(9, 5, 18, 20, 8, 20), Fouls = c(16,
12, 25, 20, 19, 19), Game_Number = 1:6, Count = c(1, 1, 1, 1,
1, 1)), class = c("grouped_df", "tbl_df", "tbl", "data.frame"
), row.names = c(NA, -6L), groups = structure(list(HomeTeam = "Boston College Eagles",
.rows = structure(list(1:6), ptype = integer(0), class = c("vctrs_list_of",
"vctrs_vctr", "list"))), class = c("tbl_df", "tbl", "data.frame"
), row.names = c(NA, -1L), .drop = TRUE))
Here is an example output of the score column.
Here is my failed attempt. The function works correctly on its own, but I cannot apply it to the entire data frame by group.
library(zoo)   # for rollsum()

weighted_avg <- function(x, wt1, wt2) {
  rs1 = rollsum(x, 1, align = "right")
  rs2 = rollsum(x, 2, align = "right")
  rs1 = rs1[-1]
  rs3 = rs2 - rs1
  weighted_avg = ((rs3 * wt2) + (rs1 * wt1)) / (wt1 + wt2)
  return(weighted_avg)
}

weighted_avg(csum$Score_Y, 2, 1)

# attempts that did not work:
apply(csum$Score_Y, 2, weighted_avg, wt1 = 2, wt2 = 1)

test <- csum %>%
  group_by(Team) %>%
  group_map(across(c(Score:Fouls), weighted_avg(.x$Team, 2, 1)))

test <- csum %>%
  group_by(Team) %>%
  group_walk(across(c(Score:Fouls), weighted_avg(.~, 2, 1)))
Here are some notes about the code:
I used the slider::slide_dbl function. First we specify the vector for which we would like to compute the moving average, Score.
As we need a sliding window of length 2, I used the .before argument in slide_dbl so that the previous value and the current value are used for calculating the moving average.
I also set the .complete argument to TRUE to make sure the moving average is only calculated when a previous value exists; in other words, there is no moving average in the first row.
For more info, check the documentation of the slider package.
library(tidyverse)
library(slider)
df %>%
  group_by(HomeTeam) %>%
  summarise(Example = c(NA, slide_dbl(Score, .before = 1, .complete = TRUE,
                                      .f = ~ (.x[1] * 1 + .x[2] * 2) / 3)))
`summarise()` has grouped output by 'HomeTeam'. You can override using the `.groups` argument.
# A tibble: 7 × 2
# Groups: HomeTeam [1]
HomeTeam Example
<chr> <dbl>
1 Boston College Eagles NA
2 Boston College Eagles NA
3 Boston College Eagles 65
4 Boston College Eagles 73.3
5 Boston College Eagles 85.3
6 Boston College Eagles 80.3
7 Boston College Eagles 73
If you want to calculate the moving average for all numeric columns, you could try:
df %>%
  group_by(HomeTeam) %>%
  summarise(across(where(is.numeric),
                   ~ c(NA, slide_dbl(., .before = 1, .complete = TRUE,
                                     .f = ~ (.x[1] * 1 + .x[2] * 2) / 3)))) %>%
  ungroup()
`summarise()` has grouped output by 'HomeTeam'. You can override using the `.groups` argument.
# A tibble: 7 × 21
HomeTeam GameId NeutralSite Home Score AST TOV STL BLK Rebounds ORB DRB FGA FGM
<chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 Boston C… NA NA NA NA NA NA NA NA NA NA NA NA NA
2 Boston C… NA NA NA NA NA NA NA NA NA NA NA NA NA
3 Boston C… 16 0 0.667 65 9.67 8.67 4.33 2 24.7 5 19.7 56.3 25.7
4 Boston C… 30 0 0.333 73.3 10 8.67 4 2.67 30 12 18 63.7 26
5 Boston C… 37 0 0.667 85.3 14.7 11.7 5.33 4 34 12 22 59 27.3
6 Boston C… 50.3 0 0.333 80.3 12.7 11.7 6 2 26.3 8 18.3 53 28.3
7 Boston C… 57 0 0.667 73 12.3 11 5.33 0.333 23 6.33 16.7 47.3 23.7
# … with 7 more variables: X3FGM <dbl>, X3FGA <dbl>, FTA <dbl>, FTM <dbl>, Fouls <dbl>,
# Game_Number <dbl>, Count <dbl>
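As an aside, the question asks for the average grouped by Team rather than HomeTeam, and keeping the result aligned with the original rows can be more convenient than summarise(). A minimal sketch of the same slide_dbl idea with mutate() (my adaptation, not part of the answer above):

library(dplyr)
library(slider)

games_h %>%
  ungroup() %>%                      # drop the existing HomeTeam grouping
  arrange(Team, Game_Number) %>%
  group_by(Team) %>%
  # lag() mirrors the c(NA, ...) shift in the answer: row i uses games i-2 and i-1
  mutate(Score_wavg = lag(slide_dbl(Score, ~ (.x[1] * 1 + .x[2] * 2) / 3,
                                    .before = 1, .complete = TRUE))) %>%
  ungroup()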

How to create before and after scores in two different columns based on date?

I have two tables: the first has stress scores recorded at various time points and the second has the date of treatment. I want to get the stress scores before and after treatment for each participant who received the treatment. I also want a column that gives information on when the stress score was recorded relative to treatment. I do not understand where to begin or what my code should look like.
score.dt = data.table(
  participant.index = c(1, 1, 1, 3, 4, 4, 13, 21, 21, 25, 37, 40, 41, 41, 41, 43, 43, 43, 44),
  repeat.instance = c(2, 3, 6, 1, 1, 2, 1, 1, 2, 1, 1, 1, 1, 2, 3, 1, 2, 3, 1),
  date.recorded = c('2017-07-13', '2017-06-26', '2018-09-17', '2016-04-14',
                    '2014-03-24', '2016-05-30', '2018-06-20', '2014-08-03',
                    '2015-07-06', '2014-12-17', '2014-09-05', '2013-06-10',
                    '2015-10-04', '2016-11-04', '2016-04-18', '2014-02-13',
                    '2013-05-24', '2014-09-10', '2014-11-25'),
  subscale = rep("stress", 19),
  score = c(18, 10, 18, 36, 16, 30, 28, 10, 12, 40, 16, 12, 10, 14, 6, 32, 42, 26, 18)
)
date.treatment.dt = data.table(
  participant.index = c(1, 4, 5, 6, 8, 10, 11, 12, 14, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26),
  date.treatment = c('2018 - 06 - 27', '2001 - 07 - 16', '2009 - 12 - 09', '2009 - 05 - 20',
                     '2009 - 07 - 22', '2008-07 - 02', '2009 - 11 - 25', '2009 - 09 - 16',
                     '1991 - 07 - 30', '2016 - 05 - 25', '2012 - 07 - 25', '2007 - 03 - 19',
                     '2012 - 01 - 25', '2011 - 09 - 21', '2000 - 03 - 06', '2001 - 09 - 25',
                     '1999 - 12 - 20', '1997 -07 - 28', '2002 - 03 - 12', '2008 - 01 - 23')
)
The desired output columns are something like this:
score.date.dt = c("candidate.index.x", "repeat.instance", "subscale", "score", "date.treatment", "date.recorded", "score.before.treatment", "score.after.treatment", "months.before.treatment", "months.after.treatment")
Here the column months.before.treatment indicates how many months before treatment the stress score was measured, and months.after.treatment indicates how many months after treatment the stress score was measured.
In your example set, you only have four individuals with stress scores that have any rows in the treatment table (participants 1, 4, 21, and 25). Only one of these, participant 1, has both a pre-treatment stress measure and a post-treatment stress measure...
Here is one way to produce the information you need:
inner_join(score.dt, date.treatment.dt, by = "participant.index") %>%
  group_by(participant.index, date.treatment) %>%
  summarize(pre_treatment = min(date.recorded[date.recorded <= date.treatment]),
            post_treatment = max(date.recorded[date.recorded >= date.treatment])) %>%
  pivot_longer(cols = -(participant.index:date.treatment),
               names_to = "period", values_to = "date.recorded") %>%
  left_join(score.dt, by = c("participant.index", "date.recorded")) %>%
  mutate(period = str_extract(period, ".*(?=_)"),
         months = abs(as.numeric(date.treatment - date.recorded)) / (365.25 / 12)) %>%
  pivot_wider(id_cols = participant.index:date.treatment, names_from = period,
              values_from = c(date.recorded, subscale, months, score))
Output:
participant.index date.treatment date.recorded_pre date.recorded_post subscale_pre subscale_post months_pre months_post score_pre score_post
<dbl> <date> <date> <date> <chr> <chr> <dbl> <dbl> <dbl> <dbl>
1 1 2018-06-27 2017-06-26 2018-09-17 stress stress 12.0 2.69 10 18
2 4 2001-07-16 NA 2016-05-30 NA stress Inf 178. NA 30
3 21 2000-03-06 NA 2015-07-06 NA stress Inf 184. NA 12
4 25 2002-03-12 NA 2014-12-17 NA stress Inf 153. NA 40
Note: you will have to fix the date columns in the two source tables first, like this:
# first, correct your date.treatment column and convert it to Date
date.treatment.dt[, date.treatment := as.Date(str_replace_all(date.treatment," ",""), "%Y-%m-%d")]
# second, similarly fix the date column in your stress score table
score.dt[,date.recorded := as.Date(date.recorded,"%Y-%m-%d")]
It seems like there are a few parts to what you're asking. First, you need to merge the two tables together. Here I use dplyr::inner_join(), which automatically detects that participant.index is the only column in common and merges on that while discarding records found in only one of the tables. Second, we convert both date columns to Date format to enable the calculation of elapsed months.
library(tidyverse)
library(data.table)
library(lubridate)
score.dt <- structure(list(participant.index = c(1, 1, 1, 3, 4, 4, 13, 21, 21, 25, 37, 40, 41, 41, 41, 43, 43, 43, 44), repeat.instance = c(2, 3, 6, 1, 1, 2, 1, 1, 2, 1, 1, 1, 1, 2, 3, 1, 2, 3, 1), date.recorded = c("2017-07-13", "2017-06-26", "2018-09-17", "2016-04-14", "2014-03-24", "2016-05-30", "2018-06-20", "2014-08-03", "2015-07-06", "2014-12-17", "2014-09-05", "2013-06-10", "2015-10-04", "2016-11-04", "2016-04-18", "2014-02-13", "2013-05-24", "2014-09-10", "2014-11-25"), subscale = c("stress", "stress", "stress", "stress", "stress", "stress", "stress", "stress", "stress", "stress", "stress", "stress", "stress", "stress", "stress", "stress", "stress", "stress", "stress"), score = c(18, 10, 18, 36, 16, 30, 28, 10, 12, 40, 16, 12, 10, 14, 6, 32, 42, 26, 18)), row.names = c(NA, -19L), class = c("data.table", "data.frame"))
date.treatment.dt <- structure(list(participant.index = c(1, 4, 5, 6, 8, 10, 11, 12, 14, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26), date.treatment = c("2018 - 06 - 27", "2001 - 07 - 16", "2009 - 12 - 09", "2009 - 05 - 20", "2009 - 07 - 22", "2008-07 - 02", "2009 - 11 - 25", "2009 - 09 - 16", "1991 - 07 - 30", "2016 - 05 - 25", "2012 - 07 - 25", "2007 - 03 - 19", "2012 - 01 - 25", "2011 - 09 - 21", "2000 - 03 - 06", "2001 - 09 - 25", "1999 - 12 - 20", "1997 -07 - 28", "2002 - 03 - 12", "2008 - 01 - 23")), row.names = c(NA, -20L), class = c("data.table", "data.frame"))
inner_join(date.treatment.dt, score.dt) %>%
  mutate(across(contains("date"), as_date)) %>%
  mutate(months.after = interval(date.treatment, date.recorded) %/% months(1)) %>%
  mutate(months.before = 0 - months.after)
#> Joining, by = "participant.index"
#> participant.index date.treatment repeat.instance date.recorded subscale
#> 1: 1 2018-06-27 2 2017-07-13 stress
#> 2: 1 2018-06-27 3 2017-06-26 stress
#> 3: 1 2018-06-27 6 2018-09-17 stress
#> 4: 4 2001-07-16 1 2014-03-24 stress
#> 5: 4 2001-07-16 2 2016-05-30 stress
#> 6: 21 2000-03-06 1 2014-08-03 stress
#> 7: 21 2000-03-06 2 2015-07-06 stress
#> 8: 25 2002-03-12 1 2014-12-17 stress
#> score months.after months.before
#> 1: 18 -11 11
#> 2: 10 -12 12
#> 3: 18 2 -2
#> 4: 16 152 -152
#> 5: 30 178 -178
#> 6: 10 172 -172
#> 7: 12 184 -184
#> 8: 40 153 -153
Created on 2022-04-05 by the reprex package (v2.0.1)
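If the goal is a single score.before.treatment and score.after.treatment per participant, one further step could be to keep the record closest to the treatment date on each side. This is my sketch, not part of either answer, and the closest-record rule is an assumption about what the OP wants:

# Rebuild the joined table with proper Date columns (as in the answer above),
# then keep, per participant, the score closest to treatment on each side.
library(dplyr)
library(lubridate)

joined <- inner_join(date.treatment.dt, score.dt, by = "participant.index") %>%
  mutate(across(contains("date"), as_date))

joined %>%
  group_by(participant.index, date.treatment) %>%
  arrange(date.recorded, .by_group = TRUE) %>%
  summarise(score.before.treatment = last(score[date.recorded < date.treatment]),
            score.after.treatment  = first(score[date.recorded >= date.treatment]),
            .groups = "drop")

Participants with no record on one side of the treatment date simply get NA for that column, since first()/last() of an empty vector return a missing value.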

Rolling regression on irregular time series

Summary (tldr)
I need to perform a rolling regression on an irregular time series (i.e. the interval may not even be periodic and can go from 0, 1, 2, 3... to ...7, 20, 24, 28...). The series is plain numeric and does not necessarily require date/time, but the rolling window needs to be defined by time. So if I have a time series that is irregularly sampled over 600 seconds and the window is 30, each regression covers 30 seconds of data, not 30 samples.
I've read examples, and while I could replicate rolling sums and medians by time, I can't seem to figure it out for regression.
The problem
First of all, I have read some of the other questions about performing rolling functions on irregular time series data, such as this: optimized rolling functions on irregular time series with time-based window, and this: Rolling window over irregular time series.
The issue is that the examples provided so far cover simple functions like sum or median, but I have not yet figured out how to perform a simple rolling regression, i.e. using lm, with the same caveat that the window is based on an irregular time series. Also, my time series is much, much simpler; no date is necessary, it's simply time "elapsed".
Getting this right is important to me because irregular time - for example, a skip in the time interval - may give an over- or underestimate of the coefficients in the rolling regression, as the sample window would otherwise include additional time.
So I was wondering if anyone can help me create a function that does this in the simplest way? The dataset is based on measuring a variable over time, i.e. 2 variables: time and response. Time is measured every x elapsed time units (seconds, minutes, so not date/time formatted), but once in a while it becomes irregular.
For every row, the function should perform a linear regression over a width of n time units. The width should never exceed n units, but may be floored (i.e. reduced) to accommodate irregular time sampling. So for example, if the width is specified as 20 seconds but time is sampled every 6 seconds, then the window will be rounded down to 18 seconds, not up to 24.
I have looked at the question here: How to calculate the average slope within a moving window in R, and I tested that code on an irregular time series, but it looks like it assumes a regular time series.
Sample data:
sample <-
structure(list(x = c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12,
13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28,
29, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 47, 48,
49), y = c(50, 49, 48, 47, 46, 47, 46, 45, 44, 43, 44, 43, 42,
41, 40, 41, 40, 39, 38, 37, 38, 37, 36, 35, 34, 35, 34, 33, 32,
31, 30, 29, 28, 29, 28, 27, 26, 25, 26, 25, 24, 23, 22, 21, 20,
19)), .Names = c("x", "y"), class = c("tbl_df", "tbl", "data.frame"
), row.names = c(NA, -46L))
My current code (based on a previous question I referred to). I know it's not subsetting by time:
library(zoo)
clm <- function(z) coef(lm(y ~ x, as.data.frame(z)))
rollme <- rollapplyr(zoo(sample), 10, clm, by.column = F, fill = NA)
The expected output (manually calculated) is below. The output is different from a regular rolling regression -- the numbers are different as soon as the time interval skips at 29 (secs):
NA
NA
NA
NA
NA
NA
NA
NA
NA
-0.696969697
-0.6
-0.551515152
-0.551515152
-0.6
-0.696969697
-0.6
-0.551515152
-0.551515152
-0.6
-0.696969697
-0.6
-0.551515152
-0.551515152
-0.6
-0.696969697
-0.6
-0.551515152
-0.551515152
-0.6
-0.696969697
-0.605042017
-0.638888889
-0.716981132
-0.597560976
-0.528301887
-0.5
-0.521008403
-0.642857143
-0.566666667
-0.551515152
-0.551515152
-0.6
-0.696969697
-0.605042017
-0.638888889
-0.716981132
I hope I'm providing enough information, but let me know (or point me to a good example somewhere) so I can give this a try.
Other things I have tried:
I've tried converting the time to POSIXct format but I don't know how to perform lm on that:
require(lubridate)
x <- as.POSIXct(strptime(sample$x, format = "%S"))
Update : Added tldr section.
Try this:
# time interval is 1
sz = 10
pl2 = list()
for (i in 1:nrow(sample)) {
  # count the earlier rows whose time falls inside the window ending at row i;
  # for the first rows fall back to a fixed window of sz offsets
  if (i < sz) period = sz else
    period = length(sample$x[sample$x > (sample$x[i] - sz) & sample$x <= sample$x[i]]) - 1
  # pl2[[i]] holds the row offsets (relative to row i) used as the window for row i
  pl2[[i]] = seq(-period, 0)
}

# update for time interval > 1
sz = 10
tint = 1
pl2 = list()
for (i in 1:nrow(sample)) {
  if (i < sz) period = sz else
    period = length(sample$x[sample$x > (sample$x[i] - sz * tint) & sample$x <= sample$x[i]]) - 1
  pl2[[i]] = seq(-period, 0)
}

rollme3 <- rollapplyr(zoo(sample), pl2, clm, by.column = F, fill = NA)
> tail(rollme3)
(Intercept) x
41 47.38182 -0.5515152
42 49.20000 -0.6000000
43 53.03030 -0.6969697
44 49.26050 -0.6050420
45 50.72222 -0.6388889
46 54.22642 -0.7169811
For the sake of completeness, here is an answer which uses data.table to aggregate in a non-equi join.
Although there are many similar questions, e.g., r calculating rolling average with window based on value (not number of rows or date/time variable), this question deserves an answer of its own as the OP is looking for the coefficients of a rolling regression.
library(data.table)
ws <- 10 # size of sliding window in time units
setDT(sample)[.(start = x - ws, end = x), on = .(x > start, x <= end),
as.list(coef(lm(y ~ x.x))), by = .EACHI]
x x (Intercept) x.x
1: -10 0 50.00000 NA
2: -9 1 50.00000 -1.0000000
3: -8 2 50.00000 -1.0000000
4: -7 3 50.00000 -1.0000000
5: -6 4 50.00000 -1.0000000
6: -5 5 49.61905 -0.7142857
7: -4 6 49.50000 -0.6428571
8: -3 7 49.50000 -0.6428571
9: -2 8 49.55556 -0.6666667
10: -1 9 49.63636 -0.6969697
11: 0 10 49.20000 -0.6000000
12: 1 11 48.88485 -0.5515152
13: 2 12 48.83636 -0.5515152
14: 3 13 49.20000 -0.6000000
15: 4 14 50.12121 -0.6969697
16: 5 15 49.20000 -0.6000000
17: 6 16 48.64242 -0.5515152
18: 7 17 48.59394 -0.5515152
19: 8 18 49.20000 -0.6000000
20: 9 19 50.60606 -0.6969697
21: 10 20 49.20000 -0.6000000
22: 11 21 48.40000 -0.5515152
23: 12 22 48.35152 -0.5515152
24: 13 23 49.20000 -0.6000000
25: 14 24 51.09091 -0.6969697
26: 15 25 49.20000 -0.6000000
27: 16 26 48.15758 -0.5515152
28: 17 27 48.10909 -0.5515152
29: 18 28 49.20000 -0.6000000
30: 19 29 51.57576 -0.6969697
31: 22 32 49.18487 -0.6050420
32: 23 33 50.13889 -0.6388889
33: 24 34 52.47170 -0.7169811
34: 25 35 48.97561 -0.5975610
35: 26 36 46.77358 -0.5283019
36: 27 37 45.75000 -0.5000000
37: 28 38 46.34454 -0.5210084
38: 29 39 50.57143 -0.6428571
39: 30 40 47.95556 -0.5666667
40: 31 41 47.43030 -0.5515152
41: 32 42 47.38182 -0.5515152
42: 33 43 49.20000 -0.6000000
43: 34 44 53.03030 -0.6969697
44: 37 47 49.26050 -0.6050420
45: 38 48 50.72222 -0.6388889
46: 39 49 54.22642 -0.7169811
x x (Intercept) x.x
Please note that rows 10 to 30 where the time series is regularly spaced are identical to OP's rollme.
The call to as.list() forces the result of coef(lm(...)) to appear in separate columns.
The code above uses a right aligned rolling window. However, the code can be easily adapted to support a left aligned window as well:
# left aligned window
setDT(sample)[.(start = x, end = x + ws), on = .(x >= start, x < end),
as.list(coef(lm(y ~ x.x))), by = .EACHI]
With runner one can apply any R function to an irregular time series. The user passes the data to the x argument and a vector of dates to the idx argument (to make the windows time dependent). The window width k can be an integer, k = 30, or a character string as accepted by seq.POSIXt, e.g. k = "30 secs".
The first example shows how to obtain both parameters from the lm function - the output will be a matrix.
library(runner)
runner(
x = sample,
k = "30 secs",
idx = sample$datetime,
function(x) {
coefficients(lm(y ~ x, data = x))
}
)
Or one can execute runner separately for each parameter
library(runner)
sample$intercept <- runner(
sample,
k = "30 secs",
idx = sample$datetime,
function(x) {
coefficients(lm(y ~ x, data = x))[1]
}
)
sample$slope <- runner(
sample,
k = "30 secs",
idx = sample$datetime,
function(x) {
coefficients(lm(y ~ x, data = x))[2]
}
)
head(sample, 15)
# datetime x y intercept slope
# 1 2020-04-13 09:27:20 0 50 50.00000 NA
# 2 2020-04-13 09:27:21 1 49 50.00000 -1.0000000
# 3 2020-04-13 09:27:25 2 48 50.00000 -1.0000000
# 4 2020-04-13 09:27:29 3 47 50.00000 -1.0000000
# 5 2020-04-13 09:27:29 4 46 50.00000 -1.0000000
# 6 2020-04-13 09:27:32 5 47 49.61905 -0.7142857
# 7 2020-04-13 09:27:34 6 46 49.50000 -0.6428571
# 8 2020-04-13 09:27:38 7 45 49.50000 -0.6428571
# 9 2020-04-13 09:27:38 8 44 49.55556 -0.6666667
# 10 2020-04-13 09:27:41 9 43 49.63636 -0.6969697
# 11 2020-04-13 09:27:44 10 44 49.45455 -0.6363636
# 12 2020-04-13 09:27:47 11 43 49.38462 -0.6153846
# 13 2020-04-13 09:27:48 12 42 49.38462 -0.6153846
# 14 2020-04-13 09:27:49 13 41 49.42857 -0.6263736
# 15 2020-04-13 09:27:50 14 40 49.34066 -0.6263736
Data with datetime column
sample <- structure(
list(
datetime = c(3, 1, 4, 4, 0, 3, 2, 4, 0, 3, 3, 3, 1, 1, 1, 3, 0, 2, 4, 2, 2,
3, 0, 1, 2, 4, 0, 1, 4, 4, 1, 2, 1, 3, 0, 4, 4, 1, 3, 0, 0, 2,
1, 0, 2, 0) + Sys.time(),
x = c(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 32, 33, 34, 35, 36, 37, 38,
39, 40, 41, 42, 43, 44, 47, 48, 49),
y = c(50, 49, 48, 47, 46, 47, 46, 45, 44, 43, 44, 43, 42, 41, 40, 41, 40, 39,
38, 37, 38, 37, 36, 35, 34, 35, 34, 33, 32, 31, 30, 29, 28, 29, 28, 27,
26, 25, 26, 25, 24, 23, 22, 21, 20,19)
),
.Names = c("datetime", "x", "y"),
class = c("tbl_df", "tbl", "data.frame"),
row.names = c(NA, -46L)
)

Transform getter value to data frame

How can I transform getter values to a data frame? For example:
I have a simple class (person) and it has 2 slots (name and age). If I want to get the age values, I run this simple instruction, person["age"], and I get this result:
An object of class "person"
Slot "val":
[1] 20 22 15 22 16
How can i transform it into data frame :
age
20
22
15
22
16
Thank you
This is the dput result (forget about the other class; person and human were just examples!):
new("Data"
, X = new("Signal"
, val = c(21, 22, 21, 22, 22, 24, 22, 23, 22, 22, 21)
)
, Y = new("Signal"
, val = c(11, 14, 13, 12, 12, 13, 12, 13, 14, 13, 13)
)
, Z = new("Signal"
, val = c(-130, -128, -129, -129, -129, -127, -128, -128, -128, -129,
-130)
)
)
This is the setClass definition:
.Signal.valid <- function(object) { return(TRUE) }
setClass(
  Class = "Signal",
  representation = representation(val = "numeric"),
  validity = .Signal.valid
)
rm(.Signal.valid)
I'm not sure if your class structure is the same as mine, but I'll give it a shot:
setClass("Signal", representation(val = "numeric"))
setClass("Data", representation(X = "Signal", Y = "Signal", Z = "Signal"))
obj <- new("Data", X = new("Signal", val = c(21, 22, 21, 22, 22, 24, 22, 23, 22, 22, 21)),
Y = new("Signal", val = c(11, 14, 13, 12, 12, 13, 12, 13, 14, 13, 13)),
Z = new("Signal", val = c(-130, -128, -129, -129, -129, -127, -128, -128, -128, -129, -130)))
data.frame(obj@X@val, obj@Y@val, obj@Z@val)
obj.X.val obj.Y.val obj.Z.val
1 21 11 -130
2 22 14 -128
3 21 13 -129
4 22 12 -129
5 22 12 -129
6 24 13 -127
7 22 12 -128
8 23 13 -128
9 22 14 -128
10 22 13 -129
11 21 13 -130
Is that what you are trying to achieve?
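The same idea can be written without spelling out each slot; a sketch, assuming every slot of the object is a Signal holding a numeric val of the same length:

# Collect the val slot of every slot of obj into one data frame
slots <- slotNames(obj)
as.data.frame(lapply(setNames(slots, slots), function(s) slot(obj, s)@val))

For the original person example, data.frame(age = person["age"]@val) should work the same way, assuming the [ method returns the Signal-like object shown in the question.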

Assign values based on "insufficient" look-up table

I have to look up some scores and assign a percentile value based on a fixed look-up table.
I've tried to solve this problem for some time now, and I have read this and this SO thread, but without solving my problem. The issue is that a raw score can be bigger than the values in the look-up table; in such cases the biggest percentile value should be assigned.
I have a look-up table like this,
lookup <- structure(list(Percentile = c(99, 95, 90, 85, 80, 75, 70, 65, 60, 55, 50, 45, 40, 35, 30, 25, 20, 15, 10, 5, 1), ACB = c(24, 19, 18, 17, 16, NA, 15, NA, 14, NA, NA, 13, NA, NA, NA, 12, NA, 11, 10, 9, 7), DFG = c(49, 39, 36, 33, 31, 30, 29, 28, 27, 26, 25, NA, 24, 23, 22, 21, 20, 19, 17, 14, 12), EIH = c(35, 30, 27, 26, 25, 24, 23, 22, 21, 20, 19, 18, 17, 16, 15, NA, 14, NA, 13, 12, NA), GKJ = c(49, 39, 36, 33, 31, 30, 29, 28, 27, 26, 25, NA, 24, 23, 22, 21, 19, 18, 17, 15, 14), Total = c(112, 99, 91, 86, 82, 79, 76, 75, 73, 71, 69, 67, 66, 65, 63, 61, 59, 55, 51, 46, 39)), .Names = c("Percentile", "ACB", "DFG", "EIH", "GKJ", "Total"), row.names = c("99+", "95", "90", "85", "80", "75", "70", "65", "60", "55", "50", "45", "40", "35", "30", "25", "20", "15", "10", "5", "1"), class = "data.frame")
lookup
Percentile ACB DFG EIH GKJ Total
99+ 99 24 49 35 49 112
95 95 19 39 30 39 99
90 90 18 36 27 36 91
85 85 17 33 26 33 86
80 80 16 31 25 31 82
75 75 NA 30 24 30 79
70 70 15 29 23 29 76
65 65 NA 28 22 28 75
60 60 14 27 21 27 73
55 55 NA 26 20 26 71
50 50 NA 25 19 25 69
45 45 13 NA 18 NA 67
40 40 NA 24 17 24 66
35 35 NA 23 16 23 65
30 30 NA 22 15 22 63
25 25 12 21 NA 21 61
20 20 NA 20 14 19 59
15 15 11 19 NA 18 55
10 10 10 17 13 17 51
5 5 9 14 12 15 46
1 1 7 12 NA 14 39
and some raw data that looks like this,
rawS_1 <- structure(list(ACB = 28, DFG = 39, EIH = 31, GKJ = NA_real_, Total = NA_real_), .Names = c("ACB", "DFG", "EIH", "GKJ", "Total"), row.names = "RawScore for ID 1", class = "data.frame")
rawS_1
ACB DFG EIH GKJ Total
RawScore for ID 1 28 39 31 NA NA
rawS_2 <- structure(list(ACB = 29, DFG = 51, EIH = 56, GKJ = 60, Total = 169), .Names = c("ACB", "DFG", "EIH", "GKJ", "Total"), row.names = "RawScore for ID 2", class = "data.frame")
rawS_2
ACB DFG EIH GKJ Total
RawScore for ID 2 29 51 56 60 169
and this is what I would like to produce:
ACB DFG EIH GKJ Total
RawScore for ID 1 12 39 19 NA NA
Percentile, ID 1 25 95 50 NA NA
ACB DFG EIH GKJ Total
RawScore for ID 2 29 51 56 60 169
Percentile, ID 2 99 99 99 99 99
I've tried merge() with all.x = TRUE and suffixes = c(".x", ".y"), but I keep getting results I don't want; any help would be appreciated.
Rather than thinking of this as merging, I think you're better off thinking of it as a problem of creating a function: you want a function that, when given the raw value of (e.g.) ACB, returns the percentile. Luckily R has a function designed to make a function from a table of numbers: approxfun.
The following code uses lapply to create a function for each column, and then shows how to call the new functions:
vars <- names(lookup)[-1]
lookup_funs <- lapply(vars, function(var) {
  df <- data.frame(x = lookup[[var]], y = lookup$Percentile)
  df <- df[complete.cases(df), ]
  approxfun(df$x, df$y, "constant", rule = 2)
})
names(lookup_funs) <- vars

lookup_funs$ACB(c(12, 29))
lookup_funs$Total(169)
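For the lookup table above, those two calls should return (my check against the table values, not output from the original answer):
#> [1] 25 99
#> [1] 99
which matches the desired output in the question: ACB 12 maps to the 25th percentile, and rule = 2 is what sends scores above the table maximum (ACB 29, Total 169) to the top percentile.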
The basic strategy is to use !is.na(vec) to index both the value and the percentile vectors. Here's a look at a single case. Which one would you prefer for an input of 11 for ACB?
> rev(lookup$Percentile)[!is.na(lookup$ACB)][
findInterval( 11, c(-Inf,rev(lookup$ACB[!is.na(lookup$ACB)]), Inf))]
[1] 20
> rev(lookup$Percentile)[!is.na(lookup$ACB)][
findInterval( 11, c(-Inf,rev(lookup$ACB[!is.na(lookup$ACB)]), Inf))-1]
[1] 15
This gets you most of the way there for one row of data:
> for(i in names(rawS_1) ) {print(rawS_1[i]); print(rev(lookup$Percentile)[ !is.na(lookup[[i]]) ][ findInterval( rawS_1[i], c( rev( lookup[[i]][ !is.na(lookup[[i]] )]) ) )] )}
ACB
RawScore for ID 1 28
[1] 99
DFG
RawScore for ID 1 39
[1] 95
EIH
RawScore for ID 1 31
[1] 90
GKJ
RawScore for ID 1 NA
[1] NA
Total
RawScore for ID 1 NA
[1] NA
You do get into indexing overruns with the subtraction of 1 from indices at the high end of the scale, so you probably ought to add an extra element to the lookup vector once you decide what result you want to see.
for(i in names(rawS_2) ) {print(rawS_2[i]); print(rev(lookup$Percentile)[ !is.na(lookup[[i]]) ][ findInterval( rawS_2[i], c( rev( lookup[[i]][ !is.na(lookup[[i]] )]) ) )] )}
ACB
RawScore for ID 2 29
[1] 99
DFG
RawScore for ID 2 51
[1] 99
EIH
RawScore for ID 2 56
[1] 95
GKJ
RawScore for ID 2 60
[1] 99
Total
RawScore for ID 2 169
[1] 99
