Forecasting irregular stock data with ARIMA and tsibble

I want to forecast a certain stock using ARIMA, in a similar way to how Rob Hyndman does it in FPP3.
The first issue I've run into is that stock data is inherently irregular, since the stock exchange is closed on weekends and some holidays. This creates problems when I want to use functions from the tidyverts packages:
> stock
# A tsibble: 750 x 6 [1D]
   Date        Open  High   Low Close Volume
   <date>     <dbl> <dbl> <dbl> <dbl>  <dbl>
 1 2019-05-21  36.3  36.4  36.3  36.4    232
 2 2019-05-22  36.4  37.0  36.4  36.8   1007
 3 2019-05-23  36.7  36.8  36.1  36.1   4298
 4 2019-05-24  36.4  36.5  36.4  36.4    452
 5 2019-05-27  36.5  36.5  36.3  36.4   2032
 6 2019-05-28  36.5  36.8  36.4  36.5   3049
 7 2019-05-29  36.2  36.5  36.1  36.5   2962
 8 2019-05-30  36.8  37.1  36.8  37.1    432
 9 2019-05-31  36.8  37.4  36.8  37.4   8424
10 2019-06-03  37.3  37.5  37.2  37.3   1550
# ... with 740 more rows
> stock %>%
+ feasts::ACF(difference(Close)) %>%
+ autoplot()
Error in `check_gaps()`:
! .data contains implicit gaps in time. You should check your data and convert implicit gaps into explicit missing values using `tsibble::fill_gaps()` if required.
The same error regarding gaps in time applies to other functions like fable::ARIMA() or feasts::gg_tsdisplay().
I have tried filling the gaps with values from previous rows:
stock %>%
  group_by_key() %>%
  fill_gaps() %>%
  tidyr::fill(Close, .direction = "down")
# A tsibble: 1,096 x 6 [1D]
   Date        Open  High   Low Close Volume
   <date>     <dbl> <dbl> <dbl> <dbl>  <dbl>
 1 2019-05-21  36.3  36.4  36.3  36.4    232
 2 2019-05-22  36.4  37.0  36.4  36.8   1007
 3 2019-05-23  36.7  36.8  36.1  36.1   4298
 4 2019-05-24  36.4  36.5  36.4  36.4    452
 5 2019-05-25  NA    NA    NA    36.4     NA
 6 2019-05-26  NA    NA    NA    36.4     NA
 7 2019-05-27  36.5  36.5  36.3  36.4   2032
 8 2019-05-28  36.5  36.8  36.4  36.5   3049
 9 2019-05-29  36.2  36.5  36.1  36.5   2962
10 2019-05-30  36.8  37.1  36.8  37.1    432
# ... with 1,086 more rows
and everything works as it should from there. My question is:
Is there a way to use the "tidyverts approach" without running into the issue regarding gaps in time?
If not, is filling the gaps with values from previous rows a correct way to overcome this or will it bias the model?

First, you're clearly using an old version of the feasts package, because the current version gives a warning rather than an error when computing the ACF from data with implicit gaps.
Second, the answer depends on what analysis you want to do. You have three choices:
use day as the time index and fill the gaps with NAs;
use day as the time index and fill the gaps with the previous closing stock prices;
use trading day as the time index, in which case there are no gaps.
Here are the results for each of them, using an example of Apple stock over the period 2014-2018.
library(fpp3)
#> ── Attaching packages ─────────────────────────────────────── fpp3 0.4.0.9000 ──
#> ✔ tibble 3.1.7 ✔ tsibble 1.1.1
#> ✔ dplyr 1.0.9 ✔ tsibbledata 0.4.0
#> ✔ tidyr 1.2.0 ✔ feasts 0.2.2
#> ✔ lubridate 1.8.0 ✔ fable 0.3.1
#> ✔ ggplot2 3.3.6 ✔ fabletools 0.3.2
#> ── Conflicts ───────────────────────────────────────────────── fpp3_conflicts ──
#> ✖ lubridate::date() masks base::date()
#> ✖ dplyr::filter() masks stats::filter()
#> ✖ tsibble::intersect() masks base::intersect()
#> ✖ tsibble::interval() masks lubridate::interval()
#> ✖ dplyr::lag() masks stats::lag()
#> ✖ tsibble::setdiff() masks base::setdiff()
#> ✖ tsibble::union() masks base::union()
1. Fill non-trading days with missing values
stock <- gafa_stock %>%
  filter(Symbol == "AAPL") %>%
  tsibble(index = Date, regular = TRUE) %>%
  fill_gaps()
stock
#> # A tsibble: 1,825 x 8 [1D]
#> Symbol Date Open High Low Close Adj_Close Volume
#> <chr> <date> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 AAPL 2014-01-02 79.4 79.6 78.9 79.0 67.0 58671200
#> 2 AAPL 2014-01-03 79.0 79.1 77.2 77.3 65.5 98116900
#> 3 <NA> 2014-01-04 NA NA NA NA NA NA
#> 4 <NA> 2014-01-05 NA NA NA NA NA NA
#> 5 AAPL 2014-01-06 76.8 78.1 76.2 77.7 65.9 103152700
#> 6 AAPL 2014-01-07 77.8 78.0 76.8 77.1 65.4 79302300
#> 7 AAPL 2014-01-08 77.0 77.9 77.0 77.6 65.8 64632400
#> 8 AAPL 2014-01-09 78.1 78.1 76.5 76.6 65.0 69787200
#> 9 AAPL 2014-01-10 77.1 77.3 75.9 76.1 64.5 76244000
#> 10 <NA> 2014-01-11 NA NA NA NA NA NA
#> # … with 1,815 more rows
stock %>%
  model(ARIMA(Close ~ pdq(d = 1)))
#> # A mable: 1 x 1
#> `ARIMA(Close ~ pdq(d = 1))`
#> <model>
#> 1 <ARIMA(0,1,0)>
In this case, calculations of the ACF will find the longest contiguous part which is too small to be meaningful, so there isn't any point showing the results of ACF() or gg_tsdisplay(). Also, the automated choice of differencing in the ARIMA model fails due to the missing values, so I have manually set it to one. The other parts of the ARIMA model work fine in the presence of missing values.
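Forecasting from this model also works in spite of the missing values. Here is a minimal sketch (my addition, not part of the original reprex), assuming the gap-filled stock object from above:
# Forecast 30 days ahead from the gap-filled daily series and plot it
stock %>%
  model(ARIMA(Close ~ pdq(d = 1))) %>%
  forecast(h = 30) %>%
  autoplot(stock)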
2. Fill non-trading days with the last observed values
stock <- stock %>%
  tidyr::fill(Close, .direction = "down")
stock
#> # A tsibble: 1,825 x 8 [1D]
#> Symbol Date Open High Low Close Adj_Close Volume
#> <chr> <date> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 AAPL 2014-01-02 79.4 79.6 78.9 79.0 67.0 58671200
#> 2 AAPL 2014-01-03 79.0 79.1 77.2 77.3 65.5 98116900
#> 3 <NA> 2014-01-04 NA NA NA 77.3 NA NA
#> 4 <NA> 2014-01-05 NA NA NA 77.3 NA NA
#> 5 AAPL 2014-01-06 76.8 78.1 76.2 77.7 65.9 103152700
#> 6 AAPL 2014-01-07 77.8 78.0 76.8 77.1 65.4 79302300
#> 7 AAPL 2014-01-08 77.0 77.9 77.0 77.6 65.8 64632400
#> 8 AAPL 2014-01-09 78.1 78.1 76.5 76.6 65.0 69787200
#> 9 AAPL 2014-01-10 77.1 77.3 75.9 76.1 64.5 76244000
#> 10 <NA> 2014-01-11 NA NA NA 76.1 NA NA
#> # … with 1,815 more rows
stock %>%
  ACF(difference(Close)) %>%
  autoplot()
stock %>%
  model(ARIMA(Close))
#> # A mable: 1 x 1
#> `ARIMA(Close)`
#> <model>
#> 1 <ARIMA(0,1,0)>
stock %>%
  gg_tsdisplay(Close)
3. Re-index by trading day
stock <- gafa_stock %>%
  filter(Symbol == "AAPL") %>%
  tsibble(index = Date, regular = TRUE) %>%
  mutate(trading_day = row_number()) %>%
  tsibble(index = trading_day)
stock
#> # A tsibble: 1,258 x 9 [1]
#> Symbol Date Open High Low Close Adj_Close Volume trading_day
#> <chr> <date> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <int>
#> 1 AAPL 2014-01-02 79.4 79.6 78.9 79.0 67.0 58671200 1
#> 2 AAPL 2014-01-03 79.0 79.1 77.2 77.3 65.5 98116900 2
#> 3 AAPL 2014-01-06 76.8 78.1 76.2 77.7 65.9 103152700 3
#> 4 AAPL 2014-01-07 77.8 78.0 76.8 77.1 65.4 79302300 4
#> 5 AAPL 2014-01-08 77.0 77.9 77.0 77.6 65.8 64632400 5
#> 6 AAPL 2014-01-09 78.1 78.1 76.5 76.6 65.0 69787200 6
#> 7 AAPL 2014-01-10 77.1 77.3 75.9 76.1 64.5 76244000 7
#> 8 AAPL 2014-01-13 75.7 77.5 75.7 76.5 64.9 94623200 8
#> 9 AAPL 2014-01-14 76.9 78.1 76.8 78.1 66.1 83140400 9
#> 10 AAPL 2014-01-15 79.1 80.0 78.8 79.6 67.5 97909700 10
#> # … with 1,248 more rows
stock %>%
  ACF(difference(Close)) %>%
  autoplot()
stock %>%
  model(ARIMA(Close))
#> # A mable: 1 x 1
#> `ARIMA(Close)`
#> <model>
#> 1 <ARIMA(2,1,3)>
stock %>%
  gg_tsdisplay(Close)
Created on 2022-05-22 by the reprex package (v2.0.1)
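One caveat worth adding to the trading-day approach (my note, not part of the answer above): because the index is the integer trading day, forecast horizons are counted in trading days rather than calendar days, and the resulting forecasts are indexed by trading day as well. A minimal sketch:
# h = 10 means 10 trading days ahead, not 10 calendar days
stock %>%
  model(ARIMA(Close)) %>%
  forecast(h = 10) %>%
  autoplot(stock)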

Related

heatwaveR package: ts2clm() turns temperature values into NA

I'm using the heatwaveR package in R to make a plot (event_line()) and visualize the heatwaves over the years. The first step is to run ts2clm(), but this command turns my temp column into NA so I can't plot anything. Does anyone see any errors?
This is my data:
>>> Data
t temp
[Date] [num]
0 2020-05-14 6.9
1 2020-05-06 6.8
2 2020-04-23 5.5
3 2020-04-16 3.6
4 2020-03-31 2.5
5 2020-02-25 2.3
6 2020-01-30 2.8
7 2019-10-02 13.4
8 2022-09-02 19
9 2022-08-15 18.7
...
687 1974-05-06 4.2
This is my code:
# Load packages
library(readxl)
library(heatwaveR)
# Load data
Data <- read_xlsx("seili_raw_temp.xlsx")
# Set t as class Date
Data$t <- as.Date(Data$t, format = "%Y-%m-%d")
# Construct seasonal and threshold climatologies
ts <- ts2clm(Data, climatologyPeriod = c("1974-05-06", "2020-05-14"))
# This is the point where almost all temp values turn into NA, so you can ignore below.
# Detect_event
res <- detect_event(ts)
# Draw heatwave plot
event_line(res, min_duration = "3", metric = "int_cum",
           start_date = c("1974-05-06"), end_date = c("2020-05-14"))
The data you posted isn't long enough to get the function to work, so I just made some up:
library(heatwaveR)
library(lubridate)
set.seed(1234)
Data <- data.frame(
  t = seq(ymd("2015-01-01"), ymd("2023-01-01"), by = "7 day"))
Data$temp <- runif(nrow(Data), 0, 45)
Then, when I execute the function, I get the result below. The problem is that your data (like the ones I generated) have one observation every 7 days. The ts2clm() function pads out the dataset so that every day has an entry and if a temperature was not observed on that day, it fills in with a missing value.
ts <- ts2clm(Data, climatologyPeriod = c("2015-01-01", "2022-12-29"))
ts
#> # A tibble: 2,920 × 5
#> doy t temp seas thresh
#> <int> <date> <dbl> <dbl> <dbl>
#> 1 1 2015-01-01 5.12 22.5 38.6
#> 2 2 2015-01-02 NA 22.4 38.5
#> 3 3 2015-01-03 NA 22.2 38.2
#> 4 4 2015-01-04 NA 22.1 37.9
#> 5 5 2015-01-05 NA 21.9 37.3
#> 6 6 2015-01-06 NA 21.7 36.8
#> 7 7 2015-01-07 NA 21.5 36.5
#> 8 8 2015-01-08 28.0 21.3 36.1
#> 9 9 2015-01-09 NA 21.2 36.1
#> 10 10 2015-01-10 NA 21.0 35.8
#> # … with 2,910 more rows
Created on 2023-02-10 by the reprex package (v2.0.1)
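If you do want to carry on to detect_event() with weekly observations, one possible workaround (my suggestion, not part of the answer above, and whether it is climatologically sensible is another matter) is to interpolate the padded temp column first, for example with zoo::na.approx():
# Linearly interpolate the NA temperatures that ts2clm() padded in,
# then run the event detection on the filled series
ts$temp <- zoo::na.approx(ts$temp, na.rm = FALSE)
res <- detect_event(ts)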

Error in match.arg(method): where does it come from?

I am running this code in order to run a bounds test on stock data.
Everything works until I call ardlBoundOrders, where I get the following error: Error in match.arg(method) : 'arg' must be of length 1
Where does this error come from? Is it possible that it comes from the merged dataset (the code runs without any problem when I only use the Excel-imported dataset)? How can I fix it?
Thanks for your help!
Here is the script:
library(quantmod)
library(ggplot2)
library(plotly)
library(dLagM)
tickers = c("DIS", "GILD", "AMZN", "AAPL")
stocks <- getSymbols(tickers,
                     from = "1994-01-01",
                     to = "2022-02-01",
                     periodicity = "monthly",
                     src = "yahoo")
DISclose <- DIS[, 4:4]
GILDclose <- GILD[, 4:4]
AMZNclose <- AMZN[, 4:4]
AAPLclose <- AAPL[, 4:4]
newdata <- merge(DATA, DISclose)
formula <- DIS.Close ~ USDEUR+CPI+CONSCONF+FEDFUNDS+HOUST+UNRATE+INDPRO+VIX+SPY+CLI
ARDLfit <- ardlDlm(formula = formula, data = newdata, p = 10, q = 10)
summary(ARDLfit)
orders3 <- ardlBoundOrders(data = newdata, formula = formula,
                           ic = "BIC", max.p = 2, max.q = 2)
p <- data.frame(orders3$q, orders3$p) + 1
Boundtest <- ardlBound(data = DATA, formula = formula2,
                       p = p, ECM = TRUE)
par(mfrow=c(1,1))
disney<-Boundtest[["ECM"]][["EC.t"]]
plot(disney, type="l")
Update:
I think I found something:
When I merge my data, the result gets squared: each row of the stock data is matched against every row of DATA. An example makes this more explicit.
Here is the variable DATA:
> DATA
# A tibble: 337 × 12
Date VIX USDEUR CPI CONSCONF FEDFUNDS HOUST SPY INDPRO UNRATE
<dttm> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 1994-01-01 00:00:00 10.6 0.897 146. 101. 3.05 1272 28.8 67.1 6.6
2 1994-02-01 00:00:00 14.9 0.895 147. 101. 3.25 1337 28.0 67.1 6.6
3 1994-03-01 00:00:00 20.5 0.876 147. 101. 3.34 1564 26.7 67.8 6.5
4 1994-04-01 00:00:00 13.8 0.877 147. 101. 3.56 1465 27.1 68.2 6.4
5 1994-05-01 00:00:00 13.0 0.859 148. 101. 4.01 1526 27.6 68.5 6.1
6 1994-06-01 00:00:00 15.0 0.846 148. 101. 4.25 1409 26.7 69.0 6.1
7 1994-07-01 00:00:00 11.1 0.818 148. 101. 4.26 1439 27.8 69.1 6.1
8 1994-08-01 00:00:00 12.0 0.818 149 101. 4.47 1450 28.8 69.5 6
9 1994-09-01 00:00:00 14.3 0.810 149. 101. 4.73 1474 27.9 69.7 5.9
10 1994-10-01 00:00:00 14.6 0.793 149. 101. 4.76 1450 28.9 70.3 5.8
# … with 327 more rows, and 2 more variables: CLI <dbl>, SPYr <dbl>
Here is the merged variable newdata:
CLI SPYr DIS.Close
1 100.52128 0.0000000000 15.53738
2 100.70483 -0.0291642024 15.53738
3 100.83927 -0.0473966064 15.53738
4 100.92260 0.0170457821 15.53738
5 100.95804 0.0159393078 15.53738
6 100.95186 -0.0293319435 15.53738
7 100.91774 0.0391511218 15.53738
8 100.86948 0.0381206253 15.53738
9 100.80795 -0.0311470101 15.53738
10 100.72614 0.0346814791 15.53738
11 100.60322 -0.0398155024 15.53738
12 100.42905 -0.0006857954 15.53738
13 100.19862 0.0418493643 15.53738
In fact, each row of DATA gets paired with the first row of DISclose, and so on for the 2nd, the 3rd... so my dataset goes from x rows to x^2 rows.
I did some research to fix this problem: apparently I should match both datasets with by="matchingIDinbothdataset", but I do not have a matching ID. Is there a solution?
Thank you in advance.
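For what it's worth, the usual way to avoid this cross-join (a hedged sketch on my part, not a verified fix for this script) is to give the xts prices an explicit Date column and merge on it, assuming DATA$Date and the xts index line up on month starts:
# Turn the xts object into a data frame with a Date column, then join on Date
DISclose_df <- data.frame(Date = as.Date(zoo::index(DISclose)),
                          zoo::coredata(DISclose))
DATA$Date <- as.Date(DATA$Date)
newdata <- merge(DATA, DISclose_df, by = "Date")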

How to use rownames_to_column with dates

I am trying to convert my Yahoo price downloads to a "tidy" format, but in the reprex below the dates lose their format and are converted to row numbers. Stated differently, how do I convert from xts to tibble and preserve the dates?
prices <- getSymbols("QQQ", adjustOHLC = TRUE, auto.assign = FALSE) %>%
  as_tibble() %>%
  rownames_to_column(var = "Date")
head(prices)
To keep it all in a single tidyverse pipe, simply convert to a data frame first:
library(quantmod)
library(tibble)
getSymbols("QQQ", adjustOHLC = TRUE, auto.assign = FALSE) %>%
as.data.frame() %>%
rownames_to_column(var = "Date") %>%
as_tibble()
#> # A tibble: 3,419 x 7
#> Date QQQ.Open QQQ.High QQQ.Low QQQ.Close QQQ.Volume QQQ.Adjusted
#> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 2007-01-03 43.5 44.1 42.5 43.2 167689500 38.3
#> 2 2007-01-04 43.3 44.2 43.2 44.1 136853500 39.1
#> 3 2007-01-05 44.0 44.0 43.5 43.8 138958800 38.9
#> 4 2007-01-08 43.9 44.1 43.6 43.9 106401600 38.9
#> 5 2007-01-09 44.0 44.3 43.6 44.1 121577500 39.1
#> 6 2007-01-10 44.0 44.7 43.8 44.6 121070100 39.6
#> 7 2007-01-11 44.7 45.2 44.7 45.1 174029800 40.0
#> 8 2007-01-12 45.0 45.3 45.0 45.3 104217300 40.2
#> 9 2007-01-16 45.3 45.4 45.1 45.3 95690500 40.1
#> 10 2007-01-17 45.1 45.3 44.8 44.9 127142600 39.8
#> # ... with 3,409 more rows
Created on 2020-08-02 by the reprex package (v0.3.0)
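Note that the Date column comes back as character (<chr> above); if you need an actual Date class, a small extension of the same pipe (my addition) is a final mutate():
library(quantmod)
library(tibble)
library(dplyr)
getSymbols("QQQ", adjustOHLC = TRUE, auto.assign = FALSE) %>%
  as.data.frame() %>%
  rownames_to_column(var = "Date") %>%
  as_tibble() %>%
  mutate(Date = as.Date(Date))  # rownames come through as character dates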
I think you should use index() on the xts object rather than rownames_to_column() on the tibble:
library(quantmod)
library(dplyr)
price.xts <- getSymbols("QQQ", adjustOHLC = TRUE, auto.assign = FALSE)
price <- as_tibble(price.xts)
price$Date <- index(price.xts)
head(price)
tail(price)

interpolate data by a few columns

I have a large data frame with meteorological conditions at different locations (column radar_id), time (column date) and heights (column hgt).
I need to interpolate the data of each parameter (temp,u,v...) to a specific height (500 m above the ground for each radar- altitude_500 column) separately for each location (radar_id) and date.
I tried using the approx command inside dplyr pipes, and also splitting the data frame, but it didn't work for me...
An example of part of my data frame:
head(example)
radar_id date temp u v hgt W wind_ang temp_diff tw altitude_500
<chr> <date> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 Dagan 2014-03-02 18.8 -6.00 4.80 77 7.68 129. 5. -3.33 547
2 Dagan 2014-03-02 17.6 -2.40 9.30 742 9.60 166. 6 -9.20 547
3 Dagan 2014-03-02 16.2 3.10 15.4 1463 15.7 -169. 5.80 -10.4 547
4 Dagan 2014-03-03 16.2 0.900 -0.500 96 1.03 -60.9 -2.6 -0.971 547
5 Dagan 2014-03-03 13.0 3.10 -0.500 754 3.14 -80.8 -4.6 -2.39 547
6 Dagan 2014-03-03 10.8 8.10 4.10 1462 9.08 -117. -5.30 -5.01 547
I want to get a column with the y values from the approx command for each parameter (the x values are the height, hgt), at a specific height (given by the altitude_500 column), after the data frame is grouped by radar_id and date.
Here's a dplyr solution. First, I define the data.
# Data
df <- read.table(text = "radar_id date temp u v hgt W wind_ang temp_diff tw altitude_500
1 Dagan 2014-03-02 18.8 -6.00 4.80 77 7.68 129. 5. -3.33 547
2 Dagan 2014-03-02 17.6 -2.40 9.30 742 9.60 166. 6 -9.20 547
3 Dagan 2014-03-02 16.2 3.10 15.4 1463 15.7 -169. 5.80 -10.4 547
4 Dagan 2014-03-03 16.2 0.900 -0.500 96 1.03 -60.9 -2.6 -0.971 547
5 Dagan 2014-03-03 13.0 3.10 -0.500 754 3.14 -80.8 -4.6 -2.39 547
6 Dagan 2014-03-03 10.8 8.10 4.10 1462 9.08 -117. -5.30 -5.01 547")
Then, I load the dplyr package.
# Load library
library(dplyr)
Finally, I group by both radar_id and date and perform a linear interpolation using approx to get the value at altitude_500 m for each column (except the grouping variables and hgt).
# Group then summarise
df %>%
  group_by(radar_id, date) %>%
  summarise_at(vars(-hgt), ~ approx(hgt, ., xout = first(altitude_500))$y)
#> # A tibble: 2 x 10
#> # Groups: radar_id [1]
#> radar_id date temp u v W wind_ang temp_diff tw
#> <fct> <fct> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 Dagan 2014~ 18.0 -3.46 7.98 9.04 155. 5.71 -7.48
#> 2 Dagan 2014~ 14.0 2.41 -0.5 2.48 -74.5 -3.97 -1.94
#> # ... with 1 more variable: altitude_500 <dbl>
Created on 2019-08-21 by the reprex package (v0.3.0)
This assumes that there is only one value of altitude_500 for each radar_id/date pair.
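A quick way to check that assumption (my addition, using the same df as above):
# Count distinct altitude_500 values per radar_id/date group; each should be 1
df %>%
  group_by(radar_id, date) %>%
  summarise(n_alt = n_distinct(altitude_500), .groups = "drop")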

tq_mutate() and Volume indicators in R

I am using the tidyquant package in R to calculate indicators for every symbol in the S&P 500.
Here is a sample of the code:
stocks_w_price_indicators <- stocks2 %>%
  group_by(symbol) %>%
  tq_mutate(select = close, mutate_fun = RSI) %>%
  tq_mutate(select = c(high, low, close), mutate_fun = CLV)
This works for price-based indicators, but not indicators that include volume.
I get "Evaluation error: argument "volume" is missing, with no default."
stocks_w_price_indicators <- stocks2 %>%
  group_by(symbol) %>%
  tq_mutate(select = close, mutate_fun = RSI) %>%
  tq_mutate(select = c(high, low, close, volume), mutate_fun = CMF)
How can I get indicators that include volume to calculate properly?
There are a few functions from the TTR package that cannot be used with tidyquant, because they either need three inputs (like adjRatios) or need an HLC object plus a volume column (like the CMF function). Normally you would solve this with the tq_mutate_xy function, but that one cannot handle the HLC object needed by CMF. A function like OBV from TTR, which only needs a price and a volume column, works fine with tq_mutate_xy (see the sketch below).
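To illustrate that last point, here is a small sketch of my own (assuming stocks2 has close and volume columns, as in the question):
library(tidyquant)
# OBV only needs a price series and a volume series,
# so tq_mutate_xy() can pass them as x and y
stocks_w_obv <- stocks2 %>%
  group_by(symbol) %>%
  tq_mutate_xy(x = close, y = volume, mutate_fun = OBV)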
Now there are two options: either the CMF function needs to be adjusted to handle an (O)HLCV object, or you create your own function.
The last option is the fastest. Since the internals of the CMF function call the CLV function, you can take the first code block you have and extend it with a normal dplyr::mutate call to calculate the CMF.
# Create a function to calculate the Chaikin money flow
tq_cmf <- function(clv, volume, n = 20){
  runSum(clv * volume, n) / runSum(volume, n)
}
stocks_w_price_indicators <- stocks2 %>%
  group_by(symbol) %>%
  tq_mutate(select = close, mutate_fun = RSI) %>%
  tq_mutate(select = c(high, low, close), mutate_fun = CLV) %>%
  mutate(cmf = tq_cmf(clv, volume, 20))
# A tibble: 5,452 x 11
# Groups: symbol [2]
symbol date open high low close volume adjusted rsi clv cmf
<chr> <date> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 MSFT 2008-01-02 35.8 36.0 35 35.2 63004200 27.1 NA -0.542 NA
2 MSFT 2008-01-03 35.2 35.7 34.9 35.4 49599600 27.2 NA 0.291 NA
3 MSFT 2008-01-04 35.2 35.2 34.1 34.4 72090800 26.5 NA -0.477 NA
4 MSFT 2008-01-07 34.5 34.8 34.2 34.6 80164300 26.6 NA 0.309 NA
5 MSFT 2008-01-08 34.7 34.7 33.4 33.5 79148300 25.7 NA -0.924 NA
6 MSFT 2008-01-09 33.4 34.5 33.3 34.4 74305500 26.5 NA 0.832 NA
7 MSFT 2008-01-10 34.3 34.5 33.8 34.3 72446000 26.4 NA 0.528 NA
8 MSFT 2008-01-11 34.1 34.2 33.7 33.9 55187900 26.1 NA -0.269 NA
9 MSFT 2008-01-14 34.5 34.6 34.1 34.4 52792200 26.5 NA 0.265 NA
10 MSFT 2008-01-15 34.0 34.4 34 34 61606200 26.2 NA -1 NA
