Tidyverse: Error in as.matrix : attempt to apply non-function - r

I am trying to calculate SPEI values using the SPEI package and the Hargreaves method. I want to automate the process so that I can calculate SPEI for all 6 stations in one go and save the results to a new object, spei.3.
SPEI is calculated in three steps. First, we calculate PET values (spei_pet), which are then subtracted from the precipitation values to get the climatic water balance (spei_cwbal). The CWBAL values are then passed, together with a scale, to the spei() function from the package of the same name to calculate the SPEI values.
I am new to R and very new to the tidyverse, but the internet says they are easier to work with. I wrote the code below to do my task, but I am surely missing something (or maybe many things) because the code throws an error. Please help me identify the error in my code and point me toward a solution.
library(tidyverse)
library(SPEI)
file_path = "I:/Proj/Excel sheets - climate/SPI/heatmap/spei_forecast_data.xlsx"
file_forecast = openxlsx::read.xlsx(file_path)
##spei calculation
spei.scale = c(3, 6, 9, 12, 15, 24)
stations = c(1:3, 5:7)
lat = c(23.29, 23.08, 22.95, 22.62, 22.43, 22.40)
lat.fn = function(i) {
  if (i <= 3)
    lat.fn = lat[i]
  else if (i == 5)
    lat.fn = lat[4]
  else if (i == 6)
    lat.fn = lat[5]
  else if (i == 7)
    lat.fn = lat[6]
}
for (i in stations) {
  file_forecast %>%
    mutate(spei_pet[i] <- hargreaves(Tmin = file_forecast$paste("tmin", i),
                                     Tmax = file_forecast$paste("tmax", i),
                                     Pre = file_forecast$paste("p", i),
                                     lat = lat.fn[i])) %>%
    mutate(spei_cwbal[i] <- spei_pet[[i]] - file_forecast$paste("p", i)) %>%
    mutate(spei.3[i] <- spei(spei_cwbal[[i]], scale = 3))
}
It throws an error
Error in as.matrix(Tmin) : attempt to apply non-function
lat.fn[i] also throws an error, which goes away if I drop the i. But I need some kind of function so that lat.fn takes a different value depending on i.
Error in lat.fn[i] : object of type 'closure' is not subsettable
Thanks.
Edit: The data is in the form of a data.frame. I converted it into a tibble to give an idea of what it looks like.
> file_forecast
# A tibble: 960 x 20
Month p7 p6 p5 p3 p2 p1 tmax7 tmax6 tmax5 tmax3 tmax2 tmax1 tmin7 tmin6 tmin5 tmin3 tmin2 tmin1
<chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 Jan 0.162 0.185 0.293 0.436 0.529 0.658 26.4 26.5 26.2 25.9 25.7 24.9 9.57 9.75 10.0 10.4 9.94 9.77
2 Feb 0.207 0.305 0.250 0.260 0.240 0.186 32.2 32.2 32.1 31.9 31.8 30.9 12.4 12.7 12.7 13.0 12.2 11.9
3 Mar 0.511 0.650 0.602 0.636 0.625 0.501 37.3 37.1 37.1 37.0 36.9 36.1 18.7 19.3 18.3 18.0 17.3 16.9
4 Apr 0.976 1.12 1.05 1.12 1.17 1.16 39.5 39.2 39.6 39.5 39.5 38.8 22.8 23.2 22.5 22.2 21.7 20.8
5 May 3.86 4.12 3.76 4.29 4.15 3.84 38.2 37.9 38.3 38.1 38.2 37.6 25.1 25.4 24.9 24.7 24.5 23.8
6 Jun 7.31 8.27 7.20 8.51 9.14 8.76 38.0 37.6 38.1 38.0 38.0 37.7 27.2 27.3 26.9 26.7 26.6 26.1
7 Jul 13.9 15.6 13.2 17.0 19.1 17.8 33.9 33.6 34.0 33.9 33.8 33.5 26.8 26.9 26.6 26.5 26.4 26.0
8 Aug 15.2 17.2 14.4 18.6 20.1 18.4 32.6 32.4 32.7 32.4 32.3 32.0 26.2 26.4 26.1 25.9 25.9 25.4
9 Sep 11.4 11.9 10.5 12.9 13.2 13.1 31.9 31.9 31.8 31.5 31.5 30.9 24.4 24.6 24.3 24.3 24.3 23.7
10 Oct 5.19 5.76 4.81 5.40 5.44 5.04 29.8 30.0 29.6 29.3 29.3 28.6 20.9 21.1 20.8 20.9 20.8 20.2
# ... with 950 more rows, and 1 more variable: year <dbl>
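Both error messages point to syntax problems rather than to the SPEI package itself. $ does not evaluate a function call, so file_forecast$paste("tmin", i) looks up a column literally named paste, gets NULL, and then tries to call NULL(...), hence "attempt to apply non-function"; use file_forecast[[paste0("tmin", i)]] instead. Likewise, lat.fn[i] subsets the function object itself ("object of type 'closure' is not subsettable"); call it as lat.fn(i). A minimal sketch of one possible fix, assuming the column naming shown above and the P - PET water balance described in the question:
library(SPEI)
spei.3 <- list()
for (i in stations) {
  tmin <- file_forecast[[paste0("tmin", i)]]
  tmax <- file_forecast[[paste0("tmax", i)]]
  pre  <- file_forecast[[paste0("p", i)]]
  pet   <- hargreaves(Tmin = tmin, Tmax = tmax, Pre = pre, lat = lat.fn(i))
  cwbal <- pre - pet                              # climatic water balance: P - PET
  spei.3[[paste0("station", i)]] <- spei(cwbal, scale = 3)
}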

Related

Obtaining hourly average data from 1 minute dataframe

I have a data set at 1-minute intervals, but I am looking for a way to convert it to hourly averages. I am new to R programming for data analysis. Below is an example of how my data looks.
If there are other easy ways besides R to solve this issue, please specify. I hope to hear from someone soon.
TimeStamp TSP PM10 PM2.5 PM1 T RH
1 01/12/2022 14:08 44.3 14.2 6.97 3.34 32.9 53.2
2 01/12/2022 14:09 40.3 16.9 7.10 3.52 33.1 53.1
3 01/12/2022 14:10 36.5 15.6 7.43 3.64 33.2 53.1
4 01/12/2022 14:11 33.0 16.5 7.29 3.40 33.2 52.6
5 01/12/2022 14:12 41.3 18.2 7.73 3.41 33.3 52.9
6 01/12/2022 14:13 38.5 16.3 7.54 3.44 33.3 53.3
7 01/12/2022 14:14 38.5 18.5 6.80 3.14 33.2 53.6
8 01/12/2022 14:15 30.7 17.1 6.86 3.33 33.2 53.7
9 01/12/2022 14:16 32.5 18.3 8.56 4.42 33.3 53.5
10 01/12/2022 14:17 26.4 15.6 9.34 4.70 33.4 53.0
11 01/12/2022 14:18 23.8 14.6 7.56 3.97 33.4 52.5
12 01/12/2022 14:19 18.1 11.4 6.15 3.08 33.4 51.7
13 01/12/2022 14:20 22.4 12.2 6.43 3.49 33.5 50.9
14 01/12/2022 14:21 17.9 12.9 6.03 3.15 33.6 50.9
15 01/12/2022 14:22 18.6 12.8 5.87 3.19 33.7 50.7
16 01/12/2022 14:23 22.3 10.7 5.49 2.74 33.7 50.6
17 01/12/2022 14:24 18.1 9.2 4.87 2.52 33.7 49.9
18 01/12/2022 14:25 19.2 13.0 5.12 2.65 33.7 50.2
19 01/12/2022 14:26 19.0 10.3 5.01 2.78 33.9 50.0
20 01/12/2022 14:27 20.0 10.3 4.78 2.57 34.0 49.4
21 01/12/2022 14:28 14.1 9.6 4.71 2.45 34.1 49.0
22 01/12/2022 14:29 16.1 10.3 4.83 2.68 34.1 48.9
23 01/12/2022 14:30 13.9 10.0 5.21 2.99 34.2 49.5
24 01/12/2022 14:31 27.3 11.5 5.90 2.94 34.2 49.7
25 01/12/2022 14:32 23.8 12.8 5.77 2.97 34.2 49.6
26 01/12/2022 14:33 19.3 12.4 5.92 3.29 34.3 49.6
27 01/12/2022 14:34 30.9 14.4 6.10 3.22 34.3 49.3
28 01/12/2022 14:35 30.5 15.0 5.73 2.98 34.3 49.9
29 01/12/2022 14:36 24.7 13.9 6.17 3.17 34.3 50.0
30 01/12/2022 14:37 27.0 12.3 6.16 3.14 34.2 50.2
31 01/12/2022 14:38 27.0 12.4 5.65 3.28 34.2 50.3
32 01/12/2022 14:39 22.2 12.5 5.51 3.10 34.2 50.2
33 01/12/2022 14:40 19.0 11.6 5.46 3.06 34.1 50.3
34 01/12/2022 14:41 24.3 14.3 5.45 3.01 34.1 50.2
35 01/12/2022 14:42 17.6 10.9 5.64 3.30 34.1 50.5
36 01/12/2022 14:43 20.9 10.1 5.80 3.26 34.0 51.0
37 01/12/2022 14:44 19.0 11.7 5.93 3.27 33.9 50.9
38 01/12/2022 14:45 25.7 15.6 6.20 3.40 33.9 51.1
39 01/12/2022 14:46 20.1 14.4 6.08 3.39 34.0 51.3
40 01/12/2022 14:47 14.8 11.1 5.91 3.44 34.1 50.9
I have tried several methods I found via my research but none seems to work for me. Below are the codes I have tried:
ref.data.hourly <- ref.data %>%
  group_by(hour = format(as.POSIXct(cut(TimeStamp, break = "hour")), "%H")) %>%
  summarise(meanval = mean(val, na.rm = TRUE))
I have also tried this
ref.data$TimeStamp <- as.POSIXct(ref.data$TimeStamp, format = "%d/%m/%Y %H:%M")
ref.data.xts$TimeStamp <- NULL
ref.data$TimeStamp <- strptime(ref.data$TimeStamp, "%d/%m/%Y %H:%M")
ref.data$group <- cut(ref.data$TimeStamp, breaks = "hour")
Your first attempt seems sensible to me. Lacking further information about your data or a specific error message, I assume the problem is in handling the date-time formatting (or actually in using cut() with date-time values).
A workaround is to convert the dates to character (if they aren't already) and then just omit the minutes. Given that as.character(ref.data$TimeStamp) is consistently formatted like e.g. 01/12/2022 14:08, you can do the following:
ref.data.hourly <- ref.data %>%
  mutate(hour_grps = substr(as.character(TimeStamp), 1, 13)) %>%
  group_by(hour_grps) %>%
  summarise(meanval = mean(val, na.rm = TRUE))
I don't think this is good practice, because it will break if you use the same code on slightly differently formatted data. For instance, if the code were used on a computer with a different locale, the date-time formatting used by as.character() may change. So please consider this a quick fix, not a permanent solution.
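If you want something locale-independent, here is a sketch using lubridate (assuming that package is available): parse the stamps into real date-times with dmy_hm() and truncate them to the hour with floor_date(), averaging all measurement columns at once.
library(dplyr)
library(lubridate)
ref.data.hourly <- ref.data %>%
  mutate(TimeStamp = dmy_hm(TimeStamp)) %>%            # parse "01/12/2022 14:08"
  group_by(hour = floor_date(TimeStamp, "hour")) %>%   # truncate to the hour
  summarise(across(TSP:RH, ~ mean(.x, na.rm = TRUE)))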

R output giving wrong values for difference between columns

I have this tibble called data1:
structure(list(subject = c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11,
12), treatment = c("099526-01", "099526-01", "099526-01", "099526-01",
"099526-01", "099526-01", "099526-01", "099526-01", "099526-01",
"099526-01", "099526-01", "099526-01"), T0 = c(34.35, 26.5, 29.65,
11.575, 34.4, 25.775, 33, 31.6, 18.35, 36.275, 36.075, 34.225
), T15min = c(34.85, 28.95, 30.2, 11.05, 34.1, 22.025, 25.325,
31.775, 17.8, 31.7, 35.35, 34.25), T2h = c(33.425, 26.125, 27.65,
11.475, 36.95, 22.975, 30.025, 31.775, 18.025, 33.025, 34.125,
34.55), T4h = c(35.7, 26.075, 29.3, 13.275, 36.45, 28.475, 30.925,
32.15, 17.425, 34.95, 34.55, 34.775), T6h = c(36.225, 28.15,
29.1, 12.25, 34.275, 26.05, 28.1, 34.025, 17.775, 35.3, 35.125,
36.725), T8h = c(34.9, 25.75, 30.425, 10.75, 34.425, 28.725,
28.475, 34.35, 19.325, 33.925, 36.95, 38.225)), row.names = c(NA,
-12L), class = c("tbl_df", "tbl", "data.frame"))
subject treatment T0 T15min T2h T4h T6h T8h
<dbl> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 1 099526-01 34.4 34.8 33.4 35.7 36.2 34.9
2 2 099526-01 26.5 29.0 26.1 26.1 28.2 25.8
3 3 099526-01 29.6 30.2 27.6 29.3 29.1 30.4
4 4 099526-01 11.6 11.0 11.5 13.3 12.2 10.8
5 5 099526-01 34.4 34.1 37.0 36.4 34.3 34.4
6 6 099526-01 25.8 22.0 23.0 28.5 26.0 28.7
7 7 099526-01 33 25.3 30.0 30.9 28.1 28.5
8 8 099526-01 31.6 31.8 31.8 32.2 34.0 34.4
9 9 099526-01 18.4 17.8 18.0 17.4 17.8 19.3
10 10 099526-01 36.3 31.7 33.0 35.0 35.3 33.9
11 11 099526-01 36.1 35.4 34.1 34.6 35.1 37.0
12 12 099526-01 34.2 34.2 34.6 34.8 36.7 38.2
I'm creating a new tibble with new columns for the difference between each time point and T0 (e.g., T15min - T0, T2h - T0), as follows:
data2 <- data1 %>%
  mutate(delta_1 = .[[4]] - .[[3]],
         delta_2 = .[[5]] - .[[3]],
         delta_3 = .[[6]] - .[[3]],
         delta_4 = .[[7]] - .[[3]],
         delta_5 = .[[8]] - .[[3]])
subject treatment T0 T15min T2h T4h T6h T8h delta_1 delta_2 delta_3 delta_4 delta_5
<dbl> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 1 099526-01 34.4 34.8 33.4 35.7 36.2 34.9 0.5 -0.925 1.35 1.88 0.550
2 2 099526-01 26.5 29.0 26.1 26.1 28.2 25.8 2.45 -0.375 -0.425 1.65 -0.75
3 3 099526-01 29.6 30.2 27.6 29.3 29.1 30.4 0.550 -2 -0.350 -0.550 0.775
4 4 099526-01 11.6 11.0 11.5 13.3 12.2 10.8 -0.525 -0.100 1.70 0.675 -0.825
5 5 099526-01 34.4 34.1 37.0 36.4 34.3 34.4 -0.300 2.55 2.05 -0.125 0.0250
6 6 099526-01 25.8 22.0 23.0 28.5 26.0 28.7 -3.75 -2.80 2.70 0.275 2.95
7 7 099526-01 33 25.3 30.0 30.9 28.1 28.5 -7.68 -2.98 -2.07 -4.9 -4.52
8 8 099526-01 31.6 31.8 31.8 32.2 34.0 34.4 0.175 0.175 0.550 2.43 2.75
9 9 099526-01 18.4 17.8 18.0 17.4 17.8 19.3 -0.550 -0.325 -0.925 -0.575 0.975
10 10 099526-01 36.3 31.7 33.0 35.0 35.3 33.9 -4.57 -3.25 -1.32 -0.975 -2.35
11 11 099526-01 36.1 35.4 34.1 34.6 35.1 37.0 -0.725 -1.95 -1.53 -0.950 0.875
12 12 099526-01 34.2 34.2 34.6 34.8 36.7 38.2 0.0250 0.325 0.550 2.50 4
However, the differences are not correct. For example, for the first subject, T2h - T0 (33.4 - 34.4) should give -1, not -0.925.
What could be wrong with the code?
The code is correct. However, the printed output is rounded: tibbles display only a few significant digits, while your underlying values carry three decimals.
Try running
options(digits = 5)
at the top of your script.
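You can confirm this by computing the difference directly from the unrounded values in the dput above:
data1$T2h[1] - data1$T0[1]
#> [1] -0.925   # i.e. 33.425 - 34.35, not the rounded 33.4 - 34.4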

Why is dplyr group_by and summarize not working?

I'm trying to get the mean for each sub-dataset in my dataset, but my output just gives me the mean for the whole dataset for each sub-dataset. I think it might be an issue with the way my dataset is structured: the data consist of x and y observations for 13 sub-datasets with the following names: dino, away, h_lines, v_lines, x_shape, star, high_lines, dots, circle, bullseye, slant_up, slant_down, wide_lines. The sub-dataset names are listed in a column called "dataset" (see the example picture below).
[image: dataset snippet]
I'm using the dplyr functions group_by() and summarize(). I've seen so many examples where this works, so I'm not sure where I'm going wrong.
This is what I've tried
dinodata %>%
  dplyr::group_by(dataset) %>%
  dplyr::summarize(mean_x = mean(x),
                   mean_y = mean(y),
                   sd_x = sd(x),
                   sd_y = sd(y),
                   correlation = cor(x, y))
and this is the output
# A tibble: 13 x 6
dataset mean_x mean_y sd_x sd_y correlation
<chr> <dbl> <dbl> <dbl> <dbl> <dbl>
1 away 54.3 47.8 16.8 26.9 -0.0641
2 bullseye 54.3 47.8 16.8 26.9 -0.0686
3 circle 54.3 47.8 16.8 26.9 -0.0683
4 dino 54.3 47.8 16.8 26.9 -0.0645
5 dots 54.3 47.8 16.8 26.9 -0.0603
6 h_lines 54.3 47.8 16.8 26.9 -0.0617
7 high_lines 54.3 47.8 16.8 26.9 -0.0685
8 slant_down 54.3 47.8 16.8 26.9 -0.0690
9 slant_up 54.3 47.8 16.8 26.9 -0.0686
10 star 54.3 47.8 16.8 26.9 -0.0630
11 v_lines 54.3 47.8 16.8 26.9 -0.0694
12 wide_lines 54.3 47.8 16.8 26.9 -0.0666
13 x_shape 54.3 47.8 16.8 26.9 -0.0656
The means and standard deviations come out the same as if I did mean(dinodata$x) and sd(dinodata$x), which is not what I want. I want the mean for each sub-dataset for x and y, etc.
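For what it's worth, the correlation column in the output does differ from group to group, which suggests group_by() is actually working. These look like the Datasaurus Dozen data, which are deliberately constructed so that all 13 subsets share nearly identical means and standard deviations. A quick check that prints more digits than a tibble does:
dinodata %>%
  dplyr::group_by(dataset) %>%
  dplyr::summarize(mean_x = mean(x), mean_y = mean(y)) %>%
  as.data.frame()   # data.frame printing reveals small per-group differences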

Draw regression line per row in R

I have the following data.
HEIrank1
HEI.ID X2007 X2008 X2009 X2010 X2011 X2012
1 OP 41.8 147.6 90.3 82.9 106.8 63.0
2 MO 20.0 20.8 21.1 20.9 12.6 20.6
3 SD 21.2 32.3 25.7 23.9 25.0 40.1
4 UN 51.8 39.8 19.9 20.9 21.6 22.5
5 WS 18.0 19.9 15.3 13.6 15.7 15.2
6 BF 11.5 36.9 20.0 23.2 18.2 23.8
7 ME 34.2 30.3 28.4 30.1 31.5 25.6
8 IM 7.7 18.1 20.5 14.6 17.2 17.1
9 OM 11.4 11.2 12.2 11.1 13.4 19.2
10 DC 14.3 28.7 20.1 17.0 22.3 16.2
11 OC 28.6 44.0 24.9 27.9 34.0 30.7
12 TH 7.4 10.0 5.8 8.8 8.7 8.6
13 CC 12.1 11.0 12.2 12.1 14.9 15.0
14 MM 11.7 24.2 18.4 18.6 31.9 31.7
15 MC 19.0 13.7 17.0 20.4 20.5 12.1
16 SH 11.4 24.8 26.1 12.7 19.9 25.9
17 SB 13.0 22.8 15.9 17.6 17.2 9.6
18 SN 11.5 18.6 22.9 12.0 20.3 11.6
19 ER 10.8 13.2 20.0 11.0 14.9 14.2
20 SL 44.9 21.6 21.3 26.5 17.0 8.0
I tried the following commands to draw a regression line for each HEI.
year <- c(2007 , 2008 , 2009 , 2010 , 2011, 2012)
op <- as.numeric(HEIrank1[1,])
lm.r <- lm(op~year)
plot(year, op)
abline(lm.r)
I want to draw a regression line for each college in one graph, and I do not know how. Can you help me?
Here's my approach with ggplot2 but the graph is uninterpretable with that many lines.
library(ggplot2)
library(reshape2)
mdat <- melt(HEIrank1, variable.name = "year")
mdat$year <- as.numeric(substring(mdat$year, 2))
ggplot(mdat, aes(year, value, colour = HEI.ID, group = HEI.ID)) +
  geom_point() + stat_smooth(se = FALSE, method = "lm")
Faceting may be a better way to go:
ggplot(mdat, aes(year, value, group = HEI.ID)) +
  geom_point() + stat_smooth(se = FALSE, method = "lm") +
  facet_wrap(~HEI.ID)
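For completeness, a base-R sketch along the lines of the original attempt (note that HEIrank1[1, ] also includes the non-numeric HEI.ID column, so drop it before coercing to numeric):
year <- 2007:2012
vals <- as.matrix(HEIrank1[, -1])   # drop the HEI.ID column
matplot(year, t(vals), pch = 1, col = seq_len(nrow(vals)),
        xlab = "year", ylab = "rank")
for (i in seq_len(nrow(vals))) {
  abline(lm(vals[i, ] ~ year), col = i)   # one fitted line per HEI
}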

Shift time series

I have 2 weekly time series, which show a small correlation (~0.33). How can I 'shift in time' one of these series, so that I can check whether there is a greater correlation in the data?
Example data:
x = textConnection('1530.2 1980.9 1811 1617 1585.4 1951.8 2146.6 1605 1395.2 1742.6 2206.5 1839.4 1699.1 1665.9 2144.7 2189.1 1718.4 1615.5 2003.3 2267.6 1772.1 1635.2 1836 2261.8 1799.1 1634.9 1638.6 2056.5 2201.4 1726.8 1586.4 1747.9 1982 1695.2 1624.9 1652.4 2011.9 1788.8 1568.4 1540.7 1866.1 2097.3 1601.3 1458.6 1424.4 1786.9 1628.4 1467.4 1476.2 1823 1736.7 1482.7 1334.2 1871.9 1752.9 1471.6 1583.2 1601.4 1987.7 1649.6 1530.9 1547.1 2165.2 1852 1656.9 1605.2 2184.6 1972 1617.6 1491.1 1709.5 2042.2 1667.1 1542.6 1497.6 2090.5 1816.8 1487.5 1468.2 2228.5 1889.9 1690.8 1395.7 1532.8 1934.4 1557.1 1570.6 1453.2 1669.6 1782 1526.1 1411 1608.1 1740.5 1492.3 1477.8 1102.6 1366.1 1701.1 1500.6 1403.2 1787.2 1776.6 1465.3 1429.5')
x = scan(x)
y = textConnection('29.8 22.6 26 24.8 28.9 27.3 26 29.2 28.2 23.9 24.5 23.6 21.1 22 20.7 19.9 22.8 25 21.6 19.1 27.2 23.7 24.2 22.4 25.5 25.4 23.4 24.7 27.4 23.4 25.8 28.8 27.7 23.7 22.9 29.4 22.6 28.6 22.2 27.6 26.2 26.2 29.8 31.5 24.5 28.7 25.9 26.9 25.9 30.5 30.5 29.4 29.3 31.4 30 27.9 28.5 26.4 29.5 28.4 25.1 24.6 21.1 23.6 20.5 23.7 25.3 20.2 23.4 21.1 23.1 24.6 20.7 20.7 26.9 24.1 24.7 25.8 26.7 26 28.9 29.5 27.4 22.1 31.6 25 27.4 30.4 28.9 27.4 22.5 28.4 28.7 31.1 29.3 28.3 30.6 28.6 26 26.2 26.2 26.7 25.6 31.5 30.9')
y = scan(y)
I'm using R with the dtw package, but I'm not familiar with these kinds of algorithms.
Thanks for any help!
You could try the ccf() function in base R. This estimates the cross-correlation function of the two time series.
For example, using your data (see below if interested in how I got the data you pasted into your Question into R objects x and y)
xyccf <- ccf(x, y)
yielding
> xyccf
Autocorrelations of series ‘X’, by lag
-17 -16 -15 -14 -13 -12 -11 -10 -9 -8 -7
0.106 0.092 0.014 0.018 0.011 0.029 -0.141 -0.153 -0.107 -0.141 -0.221
-6 -5 -4 -3 -2 -1 0 1 2 3 4
-0.274 -0.175 -0.277 -0.176 -0.217 -0.253 -0.339 -0.274 -0.267 -0.330 -0.278
5 6 7 8 9 10 11 12 13 14 15
-0.184 -0.120 -0.200 -0.156 -0.184 -0.062 -0.076 -0.117 -0.048 0.015 -0.016
16 17
-0.038 -0.029
and a plot of the cross-correlation function by lag (figure not shown).
To interpret this: when the lag is positive, y is leading x, whereas when the lag is negative, x is leading y.
Reading your data into R...
x <- scan(text = "1530.2 1980.9 1811 1617 1585.4 1951.8 2146.6 1605 1395.2 1742.6
2206.5 1839.4 1699.1 1665.9 2144.7 2189.1 1718.4 1615.5 2003.3
2267.6 1772.1 1635.2 1836 2261.8 1799.1 1634.9 1638.6 2056.5
2201.4 1726.8 1586.4 1747.9 1982 1695.2 1624.9 1652.4 2011.9
1788.8 1568.4 1540.7 1866.1 2097.3 1601.3 1458.6 1424.4 1786.9
1628.4 1467.4 1476.2 1823 1736.7 1482.7 1334.2 1871.9 1752.9
1471.6 1583.2 1601.4 1987.7 1649.6 1530.9 1547.1 2165.2 1852
1656.9 1605.2 2184.6 1972 1617.6 1491.1 1709.5 2042.2 1667.1
1542.6 1497.6 2090.5 1816.8 1487.5 1468.2 2228.5 1889.9 1690.8
1395.7 1532.8 1934.4 1557.1 1570.6 1453.2 1669.6 1782 1526.1
1411 1608.1 1740.5 1492.3 1477.8 1102.6 1366.1 1701.1 1500.6
1403.2 1787.2 1776.6 1465.3 1429.5")
y <- scan(text = "29.8 22.6 26 24.8 28.9 27.3 26 29.2 28.2 23.9 24.5 23.6 21.1 22
20.7 19.9 22.8 25 21.6 19.1 27.2 23.7 24.2 22.4 25.5 25.4 23.4
24.7 27.4 23.4 25.8 28.8 27.7 23.7 22.9 29.4 22.6 28.6 22.2 27.6
26.2 26.2 29.8 31.5 24.5 28.7 25.9 26.9 25.9 30.5 30.5 29.4 29.3
31.4 30 27.9 28.5 26.4 29.5 28.4 25.1 24.6 21.1 23.6 20.5 23.7
25.3 20.2 23.4 21.1 23.1 24.6 20.7 20.7 26.9 24.1 24.7 25.8 26.7
26 28.9 29.5 27.4 22.1 31.6 25 27.4 30.4 28.9 27.4 22.5 28.4 28.7
31.1 29.3 28.3 30.6 28.6 26 26.2 26.2 26.7 25.6 31.5 30.9")
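If you prefer to check a specific shift by hand, here is a small sketch (not from the ccf() answer above): shift one series by k weeks and recompute cor(). The values will be close to, though not identical to, the ccf() output, because ccf() normalises with the full-series mean and variance.
shift_cor <- function(x, y, k) {
  n <- length(x)
  if (k >= 0) cor(x[(1 + k):n], y[1:(n - k)])   # pair x[t + k] with y[t]
  else cor(x[1:(n + k)], y[(1 - k):n])
}
sapply(-5:5, function(k) shift_cor(x, y, k))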
