Converting zoo object into a weekly time series - r

I am working on building a time series in R.
I have a zoo object which is as follows:
I'd like to convert this into a weekly time series for analysis, and typed the following code:
tt2 <- as.ts(zz, freq = 365.25/7, start = decimal_date(ymd("2018-01-01")))
tt2[is.na(tt2)] <- 0
However, I get the following output:
Time Series:
Start = 17538
End = 18532
Frequency = 0.142857142857143
While I'd like to see the output in line with something like this:
Time Series:
Start = c(2018,2)
End = c(2020,40)
Frequency = 52
or since we can have both 53 and 52 weeks, something like:
Time Series:
Start = 1991.0848733744
End = 2005.34360027378
Frequency = 52.1785714285714
I also tried the following:
library(zoo)
zz <- read.zoo(data, split = 1, index = 2, FUN = as.week)
and converted the data into the following format:
However, if I try to convert this into a time series, I receive the following output:
Time Series:
Start = 2505
End = 2647
Frequency = 1
[1] NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA
[40] NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA
[79] NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA
[118] NA NA NA NA NA NA NA NA NA 64 NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA
I'd be keen to receive your thoughts on this.

I think using tsibble makes it easier to convert your series from daily frequency to weekly frequency. At the end you can convert back to a zoo object.
Here is a short example of what I did:
data
# A tibble: 14 x 2
Date Y
<date> <dbl>
1 2020-01-01 0.176
2 2020-01-02 0.521
3 2020-01-03 0.348
4 2020-01-04 0.801
5 2020-01-05 0.963
6 2020-01-06 0.0723
7 2020-01-07 0.638
8 2020-01-08 0.842
9 2020-01-09 0.298
10 2020-01-10 0.902
11 2020-01-11 0.943
12 2020-01-12 0.884
13 2020-01-13 0.266
14 2020-01-14 0.789
library(tsibble)
library(tidyverse)
library(zoo)
data$Date <- as.Date(data$Date)
data.w <- data %>%
  as_tsibble(index = Date) %>%
  index_by(year_week = ~ yearweek(.)) %>%
  summarise(weekly = sum(Y, na.rm = TRUE))
data.z <- zoo(data.w)
> data.z
year_week weekly
1 2020 W01 2.809756
2 2020 W02 4.579329
3 2020 W03 1.055690
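To get back to a ts object with the weekly frequency the question asks for, the weekly aggregate can be passed to ts() directly. A minimal sketch under the same setup, using hypothetical values for Y (the start date is derived with lubridate's decimal_date, as in the question):

```r
library(tsibble)
library(dplyr)
library(lubridate)

# Hypothetical daily series standing in for the original data
data <- data.frame(Date = seq(as.Date("2020-01-01"), by = "day", length.out = 14),
                   Y = runif(14))

# Aggregate to weekly totals, as in the answer above
data.w <- data %>%
  as_tsibble(index = Date) %>%
  index_by(year_week = ~ yearweek(.)) %>%
  summarise(weekly = sum(Y, na.rm = TRUE))

# Build a ts with an average-weeks-per-year frequency (365.25 / 7),
# starting at the decimal date of the first week
tt2 <- ts(data.w$weekly,
          start = decimal_date(as.Date(data.w$year_week[1])),
          frequency = 365.25 / 7)
frequency(tt2)  # 52.1785714285714, matching the desired output format
```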


Nested loop in R with two levels of variation

Hello dear Stack Overflow community,
Here is the context of my problem: I have a dataframe in which each column corresponds to one bat species and each row corresponds to the acoustic activity measured for one night (not all species were sampled on every night of recording).
e.g.:
> Dataset
Bba Ese Hsa Mda Mda.Mca Mema Mpu
1 3 NA NA NA 33 NA NA
2 NA NA NA NA 1 NA NA
3 2 4 1 NA 19 1 NA
4 NA NA NA NA 25 NA NA
5 NA NA NA NA 3 NA NA
6 1 1 NA NA 53 NA NA
7 1 NA 9 NA NA 1 NA
8 NA NA 10 NA NA NA NA
9 NA NA NA NA NA NA NA
10 1 1 NA NA NA NA NA
11 6 NA NA NA NA NA NA
12 12 NA 1 NA NA 1 NA
13 3 NA 2 NA NA 1 NA
14 1 NA NA NA NA NA NA
15 NA NA NA NA NA NA NA
16 1 NA NA NA NA NA NA
17 2 NA NA NA NA 2 NA
18 1 1 NA NA NA NA 1
19 NA NA NA NA NA NA NA
20 1 1 NA NA NA NA NA
21 2 NA 1 NA NA NA NA
22 1 NA NA NA NA 4 NA
23 1 NA 1 NA NA 1 NA
24 NA NA NA NA NA 2 NA
25 1 NA NA NA NA NA NA
26 1 NA NA NA NA 1 NA
27 1 NA NA NA NA NA NA
28 5 NA NA NA NA NA NA
29 NA NA NA NA NA NA NA
.....
To study vocal activity I am checking the quantiles of bat vocal activity per species:
apply(Dataset[, 9:15], 2, quantile, na.rm = TRUE, type = 7, probs = c(0.02, 0.25, 0.5, 0.75, 0.98))
Bba Ese Hsa Mda Mda.Mca Mema Mpu
2% 1.00 1.00 1.00 1.00 1.00 1.00 1
25% 1.00 1.00 2.00 2.00 2.00 1.00 1
50% 3.00 4.00 6.00 4.00 3.00 2.00 2
75% 9.75 12.00 18.00 12.00 20.00 4.00 6
98% 53.86 69.88 166.12 313.32 159.04 27.28 44
To test the impact of sampling effort (number of nights) on my quantile estimates, I want to do a bootstrap. More specifically, I want to calculate the mean of the bat activity if I take only 3 nights per species, using 1000 random samples with replacement. And I want to do this for every number of nights from 3 to 70.
This is what I have so far (for one species):
Bbana <- as.data.frame(Bbana)
L <- length(Bbana[, 1])
B <- 1000
m <- list()
for (j in 3:70) {
  for (i in 1:B) {
    idx <- sample(1:L, j, replace = TRUE)
    data_idx <- Bbana[idx, ]
    m[i] <- mean(data_idx)
  }
}
Somehow it didn't give me what I expected: a list of 68 elements (one per number of nights), each containing 1000 means of bat activity.
Could anyone help me?
(I don't know if it's clear enough...)
Thanks in advance
If you want to stick to loops and lists (note the matrix must be sized from the dataset, not from idx, and m must exist before the loop):
m <- list()
for (j in 3:70) {
  mat <- matrix(NA, nrow = B, ncol = ncol(Bbana))
  for (i in 1:B) {
    idx <- sample(1:L, j, replace = TRUE)
    data_idx <- Bbana[idx, ]
    mat[i, ] <- colMeans(data_idx, na.rm = TRUE)
  }
  m[[j]] <- mat
}
Otherwise, this option should work (and is more efficient / convenient to use):
sample.fun <- function(nb.nights, dataset) {
  # randomly select nb.nights rows to sample (with replacement, for a bootstrap)
  selected.rows <- sample(1:nrow(dataset), nb.nights, replace = TRUE)
  # return a vector with their column means
  return(colMeans(dataset[selected.rows, ], na.rm = TRUE))
}
sapply(3:70, function(nights) replicate(1000, sample.fun(nights, dataset), simplify = 'array'), simplify = FALSE)
This will return a list of 68 elements, each containing a matrix with 1000 columns (one column of per-species means per bootstrap replicate).
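To summarise the bootstrap output, e.g. the average across the 1000 resamples for each number of nights, one could do something like the following. This is a sketch on a toy stand-in for Dataset (the species columns and values are made up):

```r
set.seed(1)
# Toy stand-in for the bat activity data: 29 nights x 3 species, with NAs
Dataset <- as.data.frame(matrix(sample(c(NA, 1:10), 29 * 3, replace = TRUE),
                                ncol = 3,
                                dimnames = list(NULL, c("Bba", "Ese", "Hsa"))))

sample.fun <- function(nb.nights, dataset) {
  # bootstrap resample of nb.nights rows, with replacement
  selected.rows <- sample(1:nrow(dataset), nb.nights, replace = TRUE)
  colMeans(dataset[selected.rows, ], na.rm = TRUE)
}

# One matrix (species x 1000 replicates) per number of nights
boot <- sapply(3:70, function(nights) replicate(1000, sample.fun(nights, Dataset)),
               simplify = FALSE)

# Mean bootstrap estimate per species for each number of nights
est <- t(sapply(boot, rowMeans, na.rm = TRUE))
dim(est)  # 68 x 3
```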

ts() frequency for a yearly data series of 30 min frequency observations

I want to create a ts() object from a dataframe for forecasting a physical phenomenon.
My data has a 30-minute frequency over a period of one year (2018-01-01 to 2018-12-31), and I have observed that it has a seasonality of one day.
> head(pleiadesGH.v2[,c("time", "humExt.R", "tempExt", "radExt", "vientoVelo")])
time humExt.R tempExt radExt vientoVelo
1 2018-01-01 00:00:00 NA NA NA NA
2 2018-01-01 00:30:00 36.78287 16.95125 -10.08125 3.68550
3 2018-01-01 01:00:00 38.56775 16.26350 -9.75000 2.38420
4 2018-01-01 01:30:00 38.76425 15.63470 -10.08125 2.71915
5 2018-01-01 02:00:00 39.61575 15.32030 -10.41250 3.70475
6 2018-01-01 02:30:00 37.48700 15.06485 -10.74375 2.51895
Based on these answers:
https://robjhyndman.com/hyndsight/seasonal-periods/
time series with 10 min frequency in R
I conclude that my ts() frequency should be 48, because one day has 48 observations.
ts.freq1 <- ts(data = pleiadesGH.v2[,2:ncol(pleiadesGH.v2)],
start = c(2018),
frequency = 48)
But the resulting ts() has a wrong time index, as you can see below: the time index should run from 2018 to 2019, but instead it ends at 2383.
Time Series:
Start = c(2018, 1)
End = c(2383, 1)
Frequency = 48
humInt.R humInt.E tempInt tempMac humExt.R humExt.E radExt tempExt vientoVelo
2018.000 NA NA NA NA NA NA NA NA NA
2018.021 NA NA NA NA 36.78287 0.004410894 -10.08125 16.95125 3.6855000
2018.042 NA NA NA NA 38.56775 0.004427114 -9.75000 16.26350 2.3842000
2018.062 NA NA NA NA 38.76425 0.004273306 -10.08125 15.63470 2.7191500
2018.083 NA NA NA NA 39.61575 0.004280005 -10.41250 15.32030 3.7047500
2018.104 NA NA NA NA 37.48700 0.003982139 -10.74375 15.06485 2.5189500
2018.125 NA NA NA NA 35.84950 0.003735063 -10.41250 14.77010 3.2235000
2018.146 NA NA NA NA 36.68462 0.003697674 -8.75625 14.25920 1.4409500
2018.167 NA NA NA NA 41.48250 0.003954404 -11.07500 13.39460 1.5064000
2018.188 NA NA NA NA 42.54688 0.003968433 -9.41875 13.06055 3.6701000
2018.208 NA NA NA NA 43.05450 0.003969581 -9.08750 12.88370 1.6103500
2018.229 NA NA NA NA 44.11888 0.004000366 -9.41875 12.62825 1.3485500
2018.250 NA NA NA NA 46.26400 0.004061953 -9.08750 12.13700 1.9491500
2018.271 NA NA NA NA 46.88625 0.004084874 -9.08750 12.01910 2.0569500
2018.292 NA NA NA NA 49.57175 0.004187059
(figure: wrong plot due to time index)
I also tried with this frequency:
ts.freq1 <- ts(data = pleiadesGH.v2[,2:ncol(pleiadesGH.v2)],
start = c(2018),
frequency = 365.25*24*60/30 )
Getting the following result :
Time Series:
Start = c(2018, 1)
End = c(2018, 17521)
Frequency = 17532
humInt.R humInt.E tempInt tempMac humExt.R humExt.E radExt tempExt vientoVelo
2018.000 NA NA NA NA NA NA NA NA NA
2018.000 NA NA NA NA 36.78287 0.004410894 -10.08125 16.95125 3.6855000
2018.000 NA NA NA NA 38.56775 0.004427114 -9.75000 16.26350 2.3842000
2018.000 NA NA NA NA 38.76425 0.004273306 -10.08125 15.63470 2.7191500
2018.000 NA NA NA NA 39.61575 0.004280005 -10.41250 15.32030 3.7047500
2018.000 NA NA NA NA 37.48700 0.003982139 -10.74375 15.06485 2.5189500
2018.000 NA NA NA NA 35.84950 0.003735063 -10.41250 14.77010 3.2235000
2018.000 NA NA NA NA 36.68462 0.003697674 -8.75625 14.25920 1.4409500
2018.000 NA NA NA NA 41.48250 0.003954404 -11.07500 13.39460 1.5064000
2018.001 NA NA NA NA 42.54688 0.003968433 -9.41875 13.06055 3.6701000
2018.001 NA NA NA NA 43.05450 0.003969581 -9.08750 12.88370 1.6103500
2018.001 NA NA NA NA 44.11888 0.004000366 -9.41875 12.62825 1.3485500
2018.001 NA NA NA NA 46.26400 0.004061953 -9.08750 12.13700 1.9491500
But this implicitly means that my seasonality is yearly, which is not my objective. In the following picture you can see that the time index is now fixed despite the wrong seasonality:
(figure: good index, incorrect seasonality)
What am I doing wrong?
The solution is as follows:
freq.daily <- 48  # 24 hours * 2 obs per hour
ts.daily <- ts(data = pleiadesGH.v2.interp[, 2:ncol(pleiadesGH.v2.interp)],
               start = c(1),
               frequency = freq.daily)
Time Series:
Start = c(1, 1)
End = c(366, 1)
Frequency = 48
humInt.R humInt.E tempInt tempMac humExt.R humExt.E radExt
1.000000 74.56250 0.007699896 14.53500 13.625000 36.78287 0.004410894 -10.081250
1.020833 74.56250 0.007699896 14.53500 13.625000 36.78287 0.004410894 -10.081250
1.041667 74.56250 0.007699896 14.53500 13.625000 38.56775 0.004427114 -9.750000
1.062500 74.56250 0.007699896 14.53500 13.625000 38.76425 0.004273306 -10.081250
1.083333 74.56250 0.007699896 14.53500 13.625000 39.61575 0.004280005 -10.412500
1.104167 74.56250 0.007699896 14.53500 13.625000 37.48700 0.003982139 -10.743750
This is the way ts manages the index simply and effectively: each day is one seasonal cycle, and the index starts from 1.
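The effect of frequency = 48 on the index can be checked on synthetic data; a minimal sketch with made-up values:

```r
set.seed(1)
# Two days of half-hourly observations, daily seasonality
x <- ts(rnorm(96), start = c(1, 1), frequency = 48)

time(x)[1]    # 1: start of day 1
time(x)[49]   # 2: the index advances one unit per seasonal cycle (per day)
cycle(x)[49]  # 1: first half-hour slot of day 2
```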

Error in eval(expr, envir, enclos) : object not found when using eval

My code is as below:
Form_CharSizePorts2 <- function(main, size, var, wght, ret) {
main.cln <- main %>%
select(date, permno, exchcd, eval(parse(text=size)), eval(parse(text=var)), eval(parse(text=wght)), eval(parse(text=ret))) %>%
data.table
Bkpts.NYSE <- main.cln %>%
filter(exchcd == 1) %>%
group_by(date) %>%
summarize(var.P70 = quantile(.[[var]], probs=.7, na.rm=TRUE),
var.P30 = quantile(.[[var]], probs=.3, na.rm=TRUE),
size.Med = quantile(.[[size]], probs=.5, na.rm=TRUE))
main.rank <- main.cln %>%
merge(Bkpts.NYSE, by="date", all.x=TRUE) %>%
mutate(Size = ifelse(.[[size]]<size.Med, "Small", "Big"),
Var = ifelse(.[[var]]<var.P30, "Low", ifelse(.[[var]]>var.P70, "High", "Neutral")),
Port = paste(Size, Var, sep="."))
Ret <- main.rank %>%
group_by(date, Port) %>%
summarize(ret.port = weighted.mean(.[[ret]], .[[wght]], na.rm=TRUE)) %>%
spread(Port, ret.port) %>%
mutate(Small = (Small.High + Small.Neutral + Small.Low)/3,
Big = (Big.High + Big.Neutral + Big.Low)/3,
SMB = Small - Big,
High = (Small.High + Big.High)/2,
Low = (Small.Low + Big.Low)/2,
HML = High - Low)
return(Ret)
}
Form_FF4Ports <- function(dt) {
dt.cln <- dt %>%
group_by(permno) %>%
mutate(lag.ret.12t2 = lag(ret.12t2, 1))
output <- dt.cln %>%
group_by(date) %>%
summarize(MyMkt = weighted.mean(retadj.1mn, w=port.weight, na.rm=TRUE)) %>%
as.data.frame %>%
merge(Form_CharSizePorts2(dt.cln, "lag.ME.Jun", "lag.BM.FF", "port.weight", "retadj.1mn"),
by="date", all.x=TRUE) %>%
transmute(date, MyMkt, MySMB=SMB, MySMBS=Small, MySMBB=Big, MyHML=HML, MyHMLH=High, MyHMLL=Low) %>%
merge(Form_CharSizePorts2(dt.cln, "lag.ME.Jun", "lag.ret.12t2", "port.weight", "retadj.1mn"),
by="date", all.x=TRUE) %>%
transmute(date, MyMkt, MySMB, MySMBS, MySMBB, MyHML, MyHMLH, MyHMLL, MyUMD=HML, MyUMDU=High, MyUMDD=Low)
return(output)
}
dt.myFF4.m <- Form_FF4Ports(data.both.FF.m)
Part of my data is as below:
date permno shrcd exchcd cfacpr cfacshr shrout prc vol retx retadj.1mn me port.weight datadate
1 Dec 1925 10006 10 1 7.412625 7.260000 600 109.00 NA NA NA 65.40000 NA <NA>
2 Dec 1925 10022 10 1 9.365437 9.365437 200 56.00 NA NA NA 11.20000 NA <NA>
3 Dec 1925 10030 10 1 9.969793 9.155520 156 150.00 NA NA NA 23.40000 NA <NA>
4 Dec 1925 10057 11 1 4.000000 4.000000 500 12.25 NA NA NA 6.12500 NA <NA>
5 Dec 1925 10073 10 1 0.200000 0.200000 138 17.50 NA NA NA 2.41500 NA <NA>
6 Dec 1925 10081 10 1 1.000000 1.000000 1192 9.00 NA NA NA 10.72800 NA <NA>
7 Dec 1925 10102 10 1 18.137865 18.000000 201 109.75 NA NA NA 22.05975 NA <NA>
8 Dec 1925 10110 10 1 1.010000 1.000000 500 10.50 NA NA NA 5.25000 NA <NA>
9 Dec 1925 10129 10 1 1.000000 1.000000 270 -132.00 NA NA NA 35.64000 NA <NA>
10 Dec 1925 10137 11 1 21.842743 20.920870 613 71.75 NA NA NA 43.98275 NA <NA>
comp.count at revt ib dvc BE OpProf GrProf Cflow Inv AstChg Davis.bkeq d.shares ret.12t2 ME.Dec ME.Jun BM.FF OpIB
1 NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA
2 NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA
3 NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA
4 NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA
5 NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA
6 NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA
7 NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA
8 NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA
9 NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA
10 NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA NA
GrIA CFP.FF BM.m CFP.m lag.ME.Jun lag.BM.FF lag.OpIB lag.AstChg
1 NA NA NA NA NA NA NA NA
2 NA NA NA NA NA NA NA NA
3 NA NA NA NA NA NA NA NA
4 NA NA NA NA NA NA NA NA
5 NA NA NA NA NA NA NA NA
6 NA NA NA NA NA NA NA NA
7 NA NA NA NA NA NA NA NA
8 NA NA NA NA NA NA NA NA
9 NA NA NA NA NA NA NA NA
10 NA NA NA NA NA NA NA NA
When I run the code, I get the error message: Error in eval(expr, envir, enclos) : object 'lag.ME.Jun' not found.
I guess the reason could be that I used the eval(parse(text = ...)) construct here, and the environment is not set up correctly. However, I am not sure which approach I should use instead when writing a general-purpose function that works with data having different column names.
Specifically, I would like to know how I can use my function on different data frames without having to rename their columns first.
Your issue is discussed, and solved, in the 'Programming with dplyr' vignette.
The bottom line is that instead of quoting lag.ME.Jun yourself by referring to it as "lag.ME.Jun", you should rely on enquo(lag.ME.Jun) and !!lag.ME.Jun. However, this means the column should be passed as a bare name in the function call.
Your function also refers at several other points to variables that are not created in the function environment (e.g. exchcd, date), so R will currently throw errors on any dataset that does not contain these variables. In general, it is unwise for functions to rely on inputs that were not part of the function call.
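When the column names arrive as strings (as they do in Form_CharSizePorts2), an alternative covered in the same vignette is the `.data` pronoun, which avoids both eval(parse(...)) and explicit quosures. A minimal sketch on a made-up data frame (column names chosen to mirror the question; the helper name is hypothetical):

```r
library(dplyr)

# Compute breakpoints for arbitrary column names passed as strings
summarize_breakpoints <- function(df, var, size) {
  df %>%
    group_by(date) %>%
    summarize(var.P70  = quantile(.data[[var]],  probs = .7, na.rm = TRUE),
              var.P30  = quantile(.data[[var]],  probs = .3, na.rm = TRUE),
              size.Med = quantile(.data[[size]], probs = .5, na.rm = TRUE))
}

set.seed(1)
df <- data.frame(date = rep(c("2020-01", "2020-02"), each = 5),
                 lag.BM.FF = runif(10), lag.ME.Jun = runif(10))
res <- summarize_breakpoints(df, "lag.BM.FF", "lag.ME.Jun")
```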

Multi-variable scatterplot in R

I have a data set with 12 categorical variables (I'm trying to graph measurements by month) and varying numbers of observations per variable. A simple scatterplot isn't working, and I can't figure out any other code that will let me plot my data.
Data:
June July Aug Sept Oct Nov Dec Jan Feb March April May
1 0.2315 0.8933 0.5255 0.7978 0.7581 NA 0.6729 0.7831 0.3599 0.3233 0.4901 0.2591
2 0.3030 1.0136 0.3487 1.0823 0.7639 NA 0.9925 0.7843 0.3438 0.3374 0.5919 0.2978
3 0.3850 0.5880 0.7762 0.5351 0.4139 NA 0.2131 0.7609 0.5401 0.5158 0.4427 NA
4 0.2278 0.5880 0.2531 0.6209 1.0323 NA 0.3389 0.7743 0.6315 0.5639 0.0677 NA
5 0.2374 0.7155 0.2701 0.5842 0.2598 NA NA 0.9977 1.0104 NA 0.3364 NA
6 0.2390 0.7418 0.2974 0.7259 0.2544 NA NA 0.6019 0.4063 NA 0.5175 NA
7 0.3298 0.4235 NA 0.8536 0.3954 NA NA 0.7475 0.4842 NA 0.5094 NA
8 0.3861 0.3711 NA 0.8663 0.2405 NA NA 0.7484 NA NA 0.6044 NA
9 0.2012 0.5342 NA 1.1538 1.3359 NA NA 0.3515 NA NA 0.5918 NA
10 0.2791 0.4916 NA 0.7292 1.6786 NA NA 0.3555 NA NA NA NA
11 NA 0.4715 NA 0.2570 0.6435 NA NA 0.5495 NA NA NA NA
12 NA 0.2511 NA 0.1893 0.3038 NA NA 0.3295 NA NA NA NA
13 NA 0.2627 NA 0.7264 0.2344 NA NA 0.6638 NA NA NA NA
14 NA 0.2993 NA 0.9806 0.8943 NA NA 0.7682 NA NA NA NA
15 NA 0.3847 NA 0.2676 NA NA NA 1.4591 NA NA NA NA
16 NA 0.3048 NA 1.6670 NA NA NA 0.5868 NA NA NA NA
17 NA 0.3961 NA 1.8325 NA NA NA 0.8361 NA NA NA NA
18 NA NA NA NA NA NA NA 0.8399 NA NA NA NA
Assuming your original data is called df:
library(reshape2); library(ggplot2)
df <- melt(df, id.vars = NULL)
# x = rep(1:18, 12) assumes df has 18 rows and 12 month columns
ggplot(df, aes(x = rep(1:18, 12), y = value, group = variable)) +
  geom_line(aes(col = variable))
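A slightly more robust variant keeps an explicit observation index instead of relying on rep(1:18, 12) matching the row count. A sketch on made-up data shaped like the question's (18 rows, one column per month):

```r
library(reshape2)
library(ggplot2)

set.seed(1)
# Made-up stand-in: 18 observations for each of 12 months
df <- as.data.frame(matrix(runif(18 * 12), nrow = 18,
                           dimnames = list(NULL, month.abb)))
df$obs <- seq_len(nrow(df))

# Melting with the index as id.vars keeps each value paired with its row,
# so no manual rep() is needed in the plot
df.long <- melt(df, id.vars = "obs", variable.name = "month")
p <- ggplot(df.long, aes(x = obs, y = value, colour = month)) +
  geom_line()
```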

Rearrange row order of a matrix in r

I have the following data:
as.integer(datIn$Measurement.location)
myfunctionSD <- function(mydata) { return(sd(mydata, na.rm = TRUE)) }
Alltubes <- tapply(datIn$Material.loss.interval,
                   list(as.factor(datIn$Measurement.location), as.factor(datIn$Tube.number)),
                   myfunctionSD)
From this I get the following output table:
1 2 3 4 5 6
1 0.8710740 0.7269928 0.8151022 0.6397234 0.8670634 0.7042107
10 NA 0.8075675 NA NA NA NA
11 0.6977951 NA 1.0984465 1.1148588 1.2156506 0.9620030
12 NA 0.5986758 NA NA NA NA
13 0.8386249 NA 0.8398164 0.8833184 1.2469221 1.0070322
14 NA 0.5109903 NA NA NA NA
15 NA NA NA 0.9391486 1.3571094 0.8375686
16 NA 0.5761583 NA NA NA NA
17 NA NA NA NA 1.0100850 0.7171070
19 NA NA NA NA 0.5913518 NA
3 0.5580579 0.6106961 0.8971073 0.7046614 0.8456784 0.8001571
4 NA 0.7228325 NA NA NA NA
5 0.9318795 NA 0.8961706 0.7753733 0.5915633 1.0471933
6 NA 0.5968613 NA NA NA NA
7 0.7674944 NA 0.7196781 0.8543926 0.7778685 0.8697442
8 NA 0.6283008 NA NA NA NA
9 1.3687895 NA 0.8815196 1.1723445 1.1589998 0.8194962
How do I rearrange the rows into the correct numeric order, from 1 to 19, so I can plot the data correctly?
Hope someone can help me.
Something like this (order, not sort, since we need row positions rather than sorted values)...
> Alltubes[order(as.numeric(rownames(Alltubes))), ]
If df2 is your data frame:
df2[order(as.numeric(rownames(df2))), ]
X1 X2 X3 X4 X5 X6
1 0.8710740 0.7269928 0.8151022 0.6397234 0.8670634 0.7042107
3 0.5580579 0.6106961 0.8971073 0.7046614 0.8456784 0.8001571
4 NA 0.7228325 NA NA NA NA
5 0.9318795 NA 0.8961706 0.7753733 0.5915633 1.0471933
6 NA 0.5968613 NA NA NA NA
7 0.7674944 NA 0.7196781 0.8543926 0.7778685 0.8697442
8 NA 0.6283008 NA NA NA NA
9 1.3687895 NA 0.8815196 1.1723445 1.1589998 0.8194962
10 NA 0.8075675 NA NA NA NA
11 0.6977951 NA 1.0984465 1.1148588 1.2156506 0.9620030
12 NA 0.5986758 NA NA NA NA
13 0.8386249 NA 0.8398164 0.8833184 1.2469221 1.0070322
14 NA 0.5109903 NA NA NA NA
15 NA NA NA 0.9391486 1.3571094 0.8375686
16 NA 0.5761583 NA NA NA NA
17 NA NA NA NA 1.0100850 0.7171070
19 NA NA NA NA 0.5913518 NA
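Why order() rather than sort(): sort() returns the sorted row names themselves, which R would then use as row positions (including positions such as 19 that don't exist in a 17-row matrix), while order() returns the permutation that puts the names in numeric order. A minimal demonstration on a toy matrix:

```r
# Toy matrix whose character row names sort lexicographically as "1", "10", "3"
m <- matrix(1:6, ncol = 2, dimnames = list(c("1", "10", "3"), NULL))

m.sorted <- m[order(as.numeric(rownames(m))), ]
rownames(m.sorted)  # "1" "3" "10": numeric order, not lexicographic
```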
