I created this function to modify a specific column of a data frame. I want to run it for different columns and data frames, but the function does not work; I get an error message.
change.date <- function(df_date,col_nb,first.year, second.year){
df_date$col_nb <- gsub(first.year, second.year, df_date$col_nb)
df_date$col_nb <- as.Date(df_date$col_nb)
df_date$col_nb <- as.numeric(df_date$col_nb)
}
change.date(df_2020,df_2020[1], "2020","2020")
Error in `$<-.data.frame`(`*tmp*`, "col_nb", value = character(0)) :
  replacement has 0 rows, data has 7265
My reproducible data:
df_2020 <- dput(test_qst)
structure(list(Date = structure(c(1588809600, 1588809600, 1588809600,
1588809600, 1588809600, 1588809600, 1588809600, 1588809600, 1588809600,
1588809600, 1588809600, 1588809600, 1588809600, 1588809600), class = c("POSIXct",
"POSIXt"), tzone = "UTC"), Depth = c(1.72, 3.07, 3.65, 4.58,
5.39, 6.31, 7.27, 8.57, 9.73, 10.78, 11.71, 12.81, 13.79, 14.96
), salinity = c(34.7299999999999, 34.79, 34.76, 34.78, 34.77,
34.79, 34.76, 34.71, 34.78, 34.78, 34.7999999999999, 34.86, 34.7999999999999,
34.83)), class = c("tbl_df", "tbl", "data.frame"), row.names = c(NA,
-14L))
You may try
change.date <- function(df_date,col_nb,first.year, second.year){
df_date[[col_nb]] <- gsub(first.year, second.year, df_date[[col_nb]])
df_date[[col_nb]] <- as.Date(df_date[[col_nb]])
df_date[[col_nb]] <- as.numeric(df_date[[col_nb]])
df_date
}
change.date(df_2020, "Date", "2020","2020")
Date Depth salinity
<dbl> <dbl> <dbl>
1 18389 1.72 34.7
2 18389 3.07 34.8
3 18389 3.65 34.8
4 18389 4.58 34.8
5 18389 5.39 34.8
6 18389 6.31 34.8
7 18389 7.27 34.8
8 18389 8.57 34.7
9 18389 9.73 34.8
10 18389 10.8 34.8
11 18389 11.7 34.8
12 18389 12.8 34.9
13 18389 13.8 34.8
14 18389 15.0 34.8
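The reason the original function fails is that `$` does not evaluate its argument: df_date$col_nb always looks for a column literally named "col_nb", whereas [[ evaluates the variable first. A minimal base-R sketch of the difference:

```r
df <- data.frame(Date = c("2020-01-01", "2020-06-15"))
col <- "Date"

df$col     # NULL: `$` looks for a column literally named "col"
df[[col]]  # `[[` evaluates col first, so this returns the Date column

# with `[[`, assignment also hits the intended column
df[[col]] <- as.numeric(as.Date(df[[col]]))
df$Date    # 18262 18428 (days since 1970-01-01)
```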
One issue you may find when using gsub is that you lose the Date class. Unless you need a numerical timescale, it may be better to keep dates for plotting and analysis.
Using dplyr, the following extracts the years, changes them, and then creates dates again (even if they are the same year):
library(dplyr)
change.date <- function(df_date, col_nb = "Date", first.year, second.year) {
col_nb <- which(colnames(df_date) %in% col_nb)
df_date %>%
mutate(year = lubridate::year(.[[col_nb]])) %>%
mutate(year = ifelse(year == first.year, second.year, year)) %>%
mutate(Date = lubridate::make_date(year, lubridate::month(.[[col_nb]]), lubridate::day(.[[col_nb]]))) %>%
select(-year)
}
change.date(df_2020, "Date", 2020, 2020)
# A tibble: 14 x 3
Date Depth salinity
<date> <dbl> <dbl>
1 2020-05-07 1.72 34.7
2 2020-05-07 3.07 34.8
3 2020-05-07 3.65 34.8
4 2020-05-07 4.58 34.8
5 2020-05-07 5.39 34.8
6 2020-05-07 6.31 34.8
7 2020-05-07 7.27 34.8
8 2020-05-07 8.57 34.7
9 2020-05-07 9.73 34.8
10 2020-05-07 10.8 34.8
11 2020-05-07 11.7 34.8
12 2020-05-07 12.8 34.9
13 2020-05-07 13.8 34.8
14 2020-05-07 15.0 34.8
If you do want numerical dates, then use this instead of the second-last line:
mutate(Date = as.numeric(lubridate::make_date(year, lubridate::month(.[[col_nb]]), lubridate::day(.[[col_nb]])))) %>%
One comment on your function: be consistent with case. Camel case, snake case or, less commonly, dot case are all acceptable, but mixing them makes it harder to keep track of variables, e.g. df_date versus first.year.
I'm wondering if there is a simple way to filter time with a date-time POSIXct variable.
I discovered that non-equi filtering with a time variable (hms) is straightforward:
> apple_data
# A tibble: 10 × 6
SYMBOL DATE TIME BB BO date_time
<chr> <date> <time> <dbl> <dbl> <dttm>
1 AAPL 2009-01-02 09:30:00 85.6 85.6 2009-01-02 09:30:00
2 AAPL 2009-01-02 09:30:01 85.6 85.9 2009-01-02 09:30:01
3 AAPL 2009-01-02 09:30:02 85.6 85.7 2009-01-02 09:30:02
4 AAPL 2009-01-02 09:30:03 85.6 85.7 2009-01-02 09:30:03
5 AAPL 2009-01-02 09:30:04 85.6 85.8 2009-01-02 09:30:04
6 AAPL 2009-01-02 09:30:05 85.6 85.7 2009-01-02 09:30:05
7 AAPL 2009-01-02 09:30:06 85.6 85.7 2009-01-02 09:30:06
8 AAPL 2009-01-02 09:30:07 85.6 85.7 2009-01-02 09:30:07
9 AAPL 2009-01-02 09:30:08 85.6 85.7 2009-01-02 09:30:08
10 AAPL 2009-01-02 09:30:09 85.6 85.7 2009-01-02 09:30:09
apple_data %>% filter(TIME <= as_hms("09:30:05"), TIME >= as_hms("09:30:03"))
# A tibble: 3 × 6
SYMBOL DATE TIME BB BO date_time
<chr> <date> <time> <dbl> <dbl> <dttm>
1 AAPL 2009-01-02 09:30:03 85.6 85.7 2009-01-02 09:30:03
2 AAPL 2009-01-02 09:30:04 85.6 85.8 2009-01-02 09:30:04
3 AAPL 2009-01-02 09:30:05 85.6 85.7 2009-01-02 09:30:05
Question 1
If I do not have DATE and TIME variables but date_time only instead, which is POSIXct, how could I perform non-equi filtering only with time?
Question 2
I tried extracting TIME from date_time using format(date_time, "%T"), and discovered time filtering can be done even though the output is a string. However, it takes too much time to convert string to hms on big data, and I need it for merging with other data.
Is there a fast way to convert string to hms, or extract hms from date_time from the beginning so that I can skip this costly type conversion? Any suggestions are greatly appreciated.
Reprex
structure(list(SYMBOL = structure(c("AAPL", "AAPL", "AAPL", "AAPL",
"AAPL", "AAPL", "AAPL", "AAPL", "AAPL", "AAPL"), label = "Stock Symbol"),
DATE = structure(c(14246, 14246, 14246, 14246, 14246, 14246,
14246, 14246, 14246, 14246), label = "Quote date", format.sas = "YYMMDDN8", class = "Date"),
TIME = structure(c(34200, 34201, 34202, 34203, 34204, 34205,
34206, 34207, 34208, 34209), class = c("hms", "difftime"), units = "secs"),
BB = structure(c(85.55, 85.6, 85.56, 85.55, 85.57, 85.56,
85.61, 85.61, 85.62, 85.62), label = "Best Bid"), BO = structure(c(85.6,
85.86, 85.66, 85.66, 85.8, 85.66, 85.66, 85.66, 85.73, 85.73
), label = "Best Offer"), date_time = structure(c(1230888600,
1230888601, 1230888602, 1230888603, 1230888604, 1230888605,
1230888606, 1230888607, 1230888608, 1230888609), tzone = "UTC", format.sas = "DATETIME20", class = c("POSIXct",
"POSIXt"))), row.names = c(NA, -10L), class = c("tbl_df",
"tbl", "data.frame"))
1) Compute the range of seconds, rng, from the comparison times, and compare it with the seconds-of-day of date_time. This avoids character processing of date_time. Note that 86400 = 24 * 60 * 60.
library(dplyr, exclude = c("filter", "lag"))
rng <- as.difftime(c("09:30:03", "09:30:05"), unit = "secs")
apple_data %>%
dplyr::filter(between(as.numeric(date_time) %% 86400, !!!rng))
giving:
# A tibble: 3 × 6
SYMBOL DATE TIME BB BO date_time
<chr> <date> <hms> <dbl> <dbl> <dttm>
1 AAPL 2009-01-02 34203 secs 85.6 85.7 2009-01-02 09:30:03
2 AAPL 2009-01-02 34204 secs 85.6 85.8 2009-01-02 09:30:04
3 AAPL 2009-01-02 34205 secs 85.6 85.7 2009-01-02 09:30:05
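The %% 86400 step works because POSIXct is stored as seconds since the Unix epoch, which falls at midnight UTC, so the remainder modulo one day is the seconds-of-day in UTC; for data in another timezone you would need to offset accordingly. A quick check:

```r
x <- as.POSIXct("2009-01-02 09:30:03", tz = "UTC")
as.numeric(x)           # 1230888603 seconds since 1970-01-01 00:00:00 UTC
as.numeric(x) %% 86400  # 34203 = 9*3600 + 30*60 + 3 seconds after midnight
```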
2) A base R version of the above is nearly the same.
Between <- function(x, ..., rng = range(c(...))) x >= rng[1] & x <= rng[2]
rng <- as.difftime(c("09:30:03", "09:30:05"), unit = "secs")
apple_data |>
subset(Between(as.numeric(date_time) %% 86400, rng))
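As a possible shortcut (worth verifying on your version of hms), as_hms() also accepts POSIXct directly, which would give you an hms column for filtering and merging without any string conversion:

```r
library(hms)

x <- as.POSIXct(c("2009-01-02 09:30:03", "2009-01-02 09:30:05"), tz = "UTC")
as_hms(x)  # time-of-day as hms: 09:30:03, 09:30:05
```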
Year A B C D E F
1993-Q1 15.3 5.77 437.02 487.68 97 86.9
1993-Q2 13.5 5.74 455.2 504.5 94.7 85.4
1993-Q3 12.9 5.79 469.42 523.37 92.4 82.9
:::
2021-Q1 18.3 6.48 35680.82 29495.92 182.2 220.4
2021-Q2 7.9 6.46 36940.3 30562.03 180.4 218
Dataset1 <- read.csv('C:/Users/s/Desktop/R/intro/data/Dataset1.csv')
class(Dataset1)
[1] "data.frame"
time_series <- ts(Dataset1, start=1993, frequency = 4)
class(time_series)
[1] "mts" "ts" "matrix"
I don't know how to proceed from there to read my Year column as dates (quarterly) instead of numbers!
The Date class does not work well with the ts class; it is better to use year and quarter. Using the input shown reproducibly in the Note at the end, use read.csv.zoo with the yearqtr class and then convert the result to ts. The strip.white is probably not needed, but we added it just in case.
library(zoo)
z <- read.csv.zoo("Dataset1.csv", FUN = as.yearqtr, format = "%Y-Q%q",
strip.white = TRUE)
tt <- as.ts(z)
tt
## A B C D E F
## 1993 Q1 15.3 5.77 437.02 487.68 97.0 86.9
## 1993 Q2 13.5 5.74 455.20 504.50 94.7 85.4
## 1993 Q3 12.9 5.79 469.42 523.37 92.4 82.9
class(tt)
## [1] "mts" "ts" "matrix"
as.integer(time(tt)) # years
## [1] 1993 1993 1993
cycle(tt) # quarters
## Qtr1 Qtr2 Qtr3
## 1993 1 2 3
as.numeric(time(tt)) # time in years
## [1] 1993.00 1993.25 1993.50
If you did want to use Date class it would be better to use a zoo (or xts) series.
zd <- aggregate(z, as.Date, c)
zd
## A B C D E F
## 1993-01-01 15.3 5.77 437.02 487.68 97.0 86.9
## 1993-04-01 13.5 5.74 455.20 504.50 94.7 85.4
## 1993-07-01 12.9 5.79 469.42 523.37 92.4 82.9
If you want a data frame or xts object then fortify.zoo(z), fortify.zoo(zd), as.xts(z) or as.xts(zd) can be used depending on which one you want.
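Relatedly, zoo can convert a single yearqtr value straight to Date; as.Date.yearqtr has a frac argument that picks a point within the quarter (0 = first day, 1 = last day):

```r
library(zoo)

yq <- as.yearqtr("1993-Q2", format = "%Y-Q%q")
as.Date(yq)            # "1993-04-01", start of the quarter
as.Date(yq, frac = 1)  # "1993-06-30", end of the quarter
```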
Note
Lines <- "Year,A,B,C,D,E,F
1993-Q1,15.3,5.77,437.02,487.68,97,86.9
1993-Q2,13.5,5.74,455.2,504.5,94.7,85.4
1993-Q3,12.9,5.79,469.42,523.37,92.4,82.9
"
cat(Lines, file = "Dataset1.csv")
lubridate has a really nice year-quarter function, yq(), to convert year-quarters to dates.
Dataset1<-structure(list(Year = c("1993-Q1", "1993-Q2", "1993-Q3", "1993-Q4", "1994-Q1", "1994-Q2"), ChinaGDP = c(15.3, 13.5, 12.9, 14.1, 14.1, 13.3), Yuan = c(5.77, 5.74, 5.79, 5.81, 8.72, 8.7), totalcredit = c(437.02, 455.2, 469.42, 521.68, 363.42, 389.01), bankcredit = c(487.68, 504.5, 523.37, 581.83, 403.48, 431.06), creditpercGDP = c(97, 94.7, 92.4, 95.6, 91.9, 90), creditGDPratio = c(86.9, 85.4, 82.9, 85.7, 82.8, 81.2)), row.names = c(NA, 6L), class = "data.frame")
library(lubridate)
library(dplyr)
df_quarter <- Dataset1 %>%
mutate(date=yq(Year)) %>%
relocate(date, .after=Year)
df_quarter
#> Year date ChinaGDP Yuan totalcredit bankcredit creditpercGDP
#> 1 1993-Q1 1993-01-01 15.3 5.77 437.02 487.68 97.0
#> 2 1993-Q2 1993-04-01 13.5 5.74 455.20 504.50 94.7
#> 3 1993-Q3 1993-07-01 12.9 5.79 469.42 523.37 92.4
#> 4 1993-Q4 1993-10-01 14.1 5.81 521.68 581.83 95.6
#> 5 1994-Q1 1994-01-01 14.1 8.72 363.42 403.48 91.9
#> 6 1994-Q2 1994-04-01 13.3 8.70 389.01 431.06 90.0
#> creditGDPratio
#> 1 86.9
#> 2 85.4
#> 3 82.9
#> 4 85.7
#> 5 82.8
#> 6 81.2
Created on 2022-01-15 by the reprex package (v2.0.1)
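yq() is fairly flexible about the separator between year and quarter, so the exact "YYYY-Qq" spelling is not required; for example:

```r
library(lubridate)

yq("1993-Q1")  # "1993-01-01"
yq("1993.2")   # "1993-04-01": quarter 2 starts in April
```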
Here is my data:
zem <- read.table(header = TRUE, dec = ",", text = "
V1
75
74,7
74,4
74,1
73,8
75,5
73,3
73,1
72,9
73
72,8
72,3
72,1
71,9
71,7
71,6
71,3
71,4
71,3
71,2
71,1
70
69,5
69
68,5")
I want to detect anomalous values, so I decided to use library(anomalize).
Code below:
library(anomalize) # tidy anomaly detection
library(tidyverse) #tidyverse packages like dplyr, ggplot, tidyr
zem %>%
time_decompose(V1) %>%
anomalize(remainder) %>%
time_recompose() %>%
filter(anomaly == 'Yes')
and I get the error:
Error: Error time_decompose(): Object is not of class tbl_df or tbl_time.
What's wrong?
How can I get the desired result?
V1 Anomaly
1 75.0 no
2 74.7 no
3 74.4 no
4 74.1 no
5 73.8 no
6 75.5 yes
7 73.3 no
8 73.1 no
9 72.9 no
10 73.0 no
11 72.8 no
12 72.3 no
13 72.1 no
14 71.9 no
15 71.7 no
16 71.6 no
17 71.3 no
18 71.4 no
19 71.3 no
20 71.2 no
21 71.1 no
22 70.0 no
23 69.5 no
24 69.0 no
25 68.5 no
I just tried to modify the code from https://towardsdatascience.com/tidy-anomaly-detection-using-r-82a0c776d523 for my task.
The time_decompose() function requires data in the form of:
A tibble or tbl_time object
(from ?time_decompose)
Perhaps zem is a data.frame? You can include as_tibble() in the pipe to make sure it is a tibble ahead of time.
In addition, it expects to work on time based data:
It is designed to work with time-based data, and as such must have a
column that contains date or datetime information.
I added to your test data a column with dates. Here is a working example:
library(anomalize)
library(tidyverse)
zem$date <- as.Date(Sys.Date() + 1:nrow(zem))
zem %>%
as_tibble() %>%
time_decompose(V1) %>%
anomalize(remainder) %>%
time_recompose() %>%
filter(anomaly == 'Yes')
Output
Converting from tbl_df to tbl_time.
Auto-index message: index = date
frequency = 7 days
trend = 12.5 days
# A time tibble: 4 x 10
# Index: date
date observed season trend remainder remainder_l1 remainder_l2 anomaly recomposed_l1 recomposed_l2
<date> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <chr> <dbl> <dbl>
1 2020-12-28 75.5 0.782 73.8 0.934 -0.555 0.624 Yes 74.0 75.2
2 2021-01-04 72.1 0.782 72.3 -0.996 -0.555 0.624 Yes 72.5 73.7
3 2021-01-10 71.3 -0.229 70.7 0.789 -0.555 0.624 Yes 70.0 71.1
4 2021-01-12 71.1 -0.220 70.1 1.24 -0.555 0.624 Yes 69.3 70.5
Here is a visual of anomalies detected:
zem %>%
as_tibble() %>%
time_decompose(V1) %>%
anomalize(remainder) %>%
plot_anomaly_decomposition()
Plot
Data
zem <- structure(list(V1 = c(75, 74.7, 74.4, 74.1, 73.8, 75.5, 73.3,
73.1, 72.9, 73, 72.8, 72.3, 72.1, 71.9, 71.7, 71.6, 71.3, 71.4,
71.3, 71.2, 71.1, 70, 69.5, 69, 68.5), date = structure(c(18619,
18620, 18621, 18622, 18623, 18624, 18625, 18626, 18627, 18628,
18629, 18630, 18631, 18632, 18633, 18634, 18635, 18636, 18637,
18638, 18639, 18640, 18641, 18642, 18643), class = "Date")), row.names = c(NA,
-25L), class = "data.frame")
I have the following dataset of weather conditions in 5 different sites observed in 15-minute intervals over a year, and am developing a shiny app based on it.
site_id date_time latitude longitude ambient_air_tem~ relative_humidy barometric_pres~ average_wind_sp~ particulate_den~
<chr> <dttm> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 arc1046 2019-11-15 09:15:00 -37.8 145. 14.4 65.4 1007. 7.45 3.9
2 arc1048 2019-11-15 09:15:00 -37.8 145. 14.0 65.5 1006. 6.95 4.4
3 arc1045 2019-11-15 09:15:00 -37.8 145. 14.8 60 1007. 4.93 3.9
4 arc1047 2019-11-15 09:15:00 -37.8 145. 14.4 66.1 1008. 7.85 4.5
5 arc1050 2019-11-15 09:15:00 -37.8 145. 14.1 64.7 1007. 5.8 3.9
6 arc1045 2019-11-15 09:30:00 -37.8 145. 15.4 57.1 1007. 4.43 3.8
7 arc1046 2019-11-15 09:30:00 -37.8 145. 14.8 63.2 1007. 7.6 4.5
8 arc1047 2019-11-15 09:30:00 -37.8 145. 15.2 62.7 1008 7.13 3.6
9 arc1048 2019-11-15 09:30:00 -37.8 145. 14.6 62.2 1007. 7.09 4.7
10 arc1050 2019-11-15 09:30:00 -37.8 145. 14.6 62.5 1007 5.94 3.5
I mapped the 5 sites using leaflet.
leaflet(quarter_hour_readings) %>%
addTiles() %>%
addCircleMarkers(
layerId = ~site_id,
label = ~site_id)
And now I want to include radial (spider) plots on each of the markers on the map, upon selecting a single date. For now I have filtered out the data values at a single date, for the following radial plot.
library(fmsb)
dat <- rbind(c(85.00,100.00,2000.00,160.00,999.9,1999.9),
c(-40.00,0.00,10.00,0.00,0.00,0.00),
quarter_hour_readings %>%
filter(date_time == as.POSIXct("2019-11-15 09:15:00",tz="UTC")) %>%
column_to_rownames(var="site_id") %>%
select(c("ambient_air_temperature","relative_humidy","barometric_pressure", "average_wind_speed", "particulate_density_2.5", "particulate_density_10")))
radarchart(dat)
I am, however, unsure how to include these radial plots on the respective markers on the map, and whether there is an easier way to handle this. Although I found this package to insert minicharts on leaflet maps, I wasn't able to find how to add radar plots to a map.
Note: since you did not provide a reproducible dataset, I use some fake data.
You can follow the approach described here:
library(leaflet)
library(fmsb)   # radarchart()
library(shiny)  # plotPNG()

m <- leaflet() %>% addTiles()
rand_lng <- function(n = 5) rnorm(n, -93.65, .01)
rand_lat <- function(n = 5) rnorm(n, 42.0285, .01)
rdr_dat <- structure(list(total = c(5, 1, 2.15031008049846, 4.15322054177523,
2.6359076872468),
phys = c(15, 3, 12.3804132539814, 6.6208886719424,
12.4789917719968),
psycho = c(3, 0, 0.5, NA, 3),
social = c(5, 1, 2.82645894121379,
4.82733338139951, 2.81333662476391),
env = c(5, 1, 5, 2.5, 4)),
row.names = c(NA, -5L), class = "data.frame")
makePlotURI <- function(expr, width, height, ...) {
pngFile <- plotPNG(function() { expr }, width = width, height = height, ...)
on.exit(unlink(pngFile))
base64 <- httpuv::rawToBase64(readBin(pngFile, raw(1), file.size(pngFile)))
paste0("data:image/png;base64,", base64)
}
set.seed(1)
plots <- data.frame(lat = rand_lat(),
lng = rand_lng(),
radar = rep(makePlotURI({radarchart(rdr_dat)}, 200, 200, bg = "white"), 5))
m %>% addMarkers(icon = ~ icons(radar), data = plots)
I would like to take advantage of BatchGetSymbols.
Any advice on how I can best manipulate the output to get the format below?
symbols_RP <- c('VDNR.L','VEUD.L','VDEM.L','IDTL.L','IEMB.L','GLRE.L','IGLN.L')
#Setting price download date range
from_date <- as.Date('2019-01-01')
to_date <- as.Date(Sys.Date())
get.symbol.adjclose <- function(ticker) {
l.out <- BatchGetSymbols(symbols_RP, first.date = from_date, last.date = to_date, do.cache=TRUE, freq.data = "daily", do.complete.data = TRUE, do.fill.missing.prices = TRUE, be.quiet = FALSE)
return(l.out$df.tickers)
}
prices <- get.symbol.adjclose(symbols_RP)
Output of BatchGetSymbols:
$df.tickers
price.open price.high price.low price.close volume price.adjusted ref.date ticker ret.adjusted.prices ret.closing.prices
1 60.6000 61.7950 60.4000 61.5475 4717 60.59111 2019-01-02 VDNR.L NA NA
2 60.7200 60.9000 60.5500 60.6650 22015 59.72233 2019-01-03 VDNR.L -1.433838e-02 -1.433852e-02
3 60.9050 60.9500 60.9050 61.8875 1010 60.92583 2019-01-04 VDNR.L 2.015164e-02 2.015165e-02
4 62.3450 62.7850 62.3400 62.7300 820 61.75524 2019-01-07 VDNR.L 1.361339e-02 1.361340e-02
Desired output below:
VTI PUTW VEA VWO TLT VNQI GLD EMB UST FTAL
2019-01-02 124.6962 25.18981 35.72355 36.92347 118.6449 48.25209 121.33 97.70655 55.18464 45.76
2019-01-03 121.8065 25.05184 35.43429 36.34457 119.9950 48.32627 122.43 98.12026 56.01122 45.54
2019-01-04 125.8384 25.39677 36.52383 37.49271 118.6061 49.38329 121.44 98.86311 55.10592 46.63
2019-01-07 127.1075 25.57416 36.63954 37.56989 118.2564 49.67072 121.86 99.28625 54.81071 46.54
2019-01-08 128.4157 25.61358 36.89987 37.78215 117.9456 50.06015 121.53 99.21103 54.54502 47.05
2019-01-09 129.0210 25.56431 37.35305 38.33209 117.7610 50.39395 122.31 99.38966 54.56470 47.29
As I know from other languages, I could use a for loop, but I know there are faster ways in R.
Could someone point me to the R way?
Improved version:
get.symbol.adjclose <- function(ticker) {
l.out <- BatchGetSymbols(symbols_RP, first.date = from_date, last.date = to_date, do.cache=TRUE, freq.data = "daily", do.complete.data = TRUE, do.fill.missing.prices = TRUE, be.quiet = FALSE)
return(as.data.frame(l.out$df.tickers[c("ticker","ref.date","price.open","price.high","price.low","price.close","volume","price.adjusted")]))
}
Using dplyr and tidyr. I'm selecting price.adjusted, but you can use any of the prices you need.
library(dplyr)
library(tidyr)
prices %>%
select(ref.date, ticker, price.adjusted) %>% # select columns before pivot_wider
pivot_wider(names_from = ticker, values_from = price.adjusted)
# A tibble: 352 x 7
ref.date GLRE.L IDTL.L IGLN.L VDEM.L VDNR.L VEUD.L
<date> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 2019-01-02 NA NA 25.2 51.0 60.6 30.2
2 2019-01-03 32.2 4.50 25.3 50.3 59.7 30.1
3 2019-01-04 32.6 4.47 25.2 51.7 60.9 30.9
4 2019-01-07 32.8 4.47 25.3 51.8 61.8 31.0
5 2019-01-08 32.8 4.44 25.2 51.9 62.0 31.3
6 2019-01-09 33.3 4.43 25.3 53.0 62.7 31.7
7 2019-01-10 33.5 4.41 25.3 53.2 62.7 31.7
8 2019-01-11 33.8 4.40 25.3 53.1 62.8 31.6
9 2019-01-14 33.8 4.41 25.3 52.7 62.7 31.4
10 2019-01-15 34.0 4.41 25.3 53.1 63.1 31.4
# ... with 342 more rows
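If you need exactly the matrix-like layout of the desired output, with dates as row names instead of a ref.date column, tibble::column_to_rownames can finish the job. A minimal sketch with toy data standing in for df.tickers:

```r
library(dplyr)
library(tidyr)
library(tibble)

# toy stand-in for BatchGetSymbols' long-format df.tickers
prices <- tibble(
  ref.date       = as.Date(c("2019-01-02", "2019-01-02",
                             "2019-01-03", "2019-01-03")),
  ticker         = c("VDNR.L", "VEUD.L", "VDNR.L", "VEUD.L"),
  price.adjusted = c(60.59, 30.20, 59.72, 30.10)
)

prices %>%
  pivot_wider(names_from = ticker, values_from = price.adjusted) %>%
  column_to_rownames("ref.date")  # dates become row names
```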
Note from BatchGetSymbols:
IEMB.L OUT: not enough data (thresh.bad.data = 75%)