I have a script to download data from Yahoo Finance into R. It works well for every stock, but has a hard time with indexes. I am trying to pull the index TNX, but it only gives me data for 5 consecutive days.
I've tried putting a "^" before the ticker, because that is the prefix Yahoo Finance uses for indexes, but it doesn't work.
ticker <- "TNX"
start.date <- as.Date('2016-09-01')
getSymbols(ticker, src='yahoo', from=start.date)
Adj.Close <- get(ticker)[,6]
daily.returns <- ROC(Adj.Close, n=1, type='continuous')
When I put this in I get no errors, but when I view daily.returns I get this:
2019-04-22 NA
2019-04-23 -0.03306086
2019-04-24 0.00000000
2019-04-25 -0.03419136
2019-04-26 0.00000000
That's all. This code works fine on any other stock, but I just can't figure out this one.
Thank you for your time, and even if you can't help, your desire to help is appreciated.
You're getting all the data that Yahoo has:
https://finance.yahoo.com/quote/TNX/history
You are using a ticker symbol that has been delisted but that Yahoo hasn't completely removed.
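For reference, a minimal sketch of how an index symbol is usually requested through quantmod (using ^GSPC only as a stand-in example; whether Yahoo returns a full history still depends on the particular symbol):
library(quantmod)
# index tickers on Yahoo carry a "^" prefix; auto.assign = FALSE hands the data
# back directly instead of assigning it under a name with the "^" stripped
gspc <- getSymbols("^GSPC", src = 'yahoo', from = "2016-09-01", auto.assign = FALSE)
head(Ad(gspc))  # adjusted close column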
Both maxMATX and maxZIM return no observations, which I am very confused about.
Here is the code:
library(tseries)
# tseries provides get.hist.quote() for the financial data, hence we need to load it
data.ZIM <- get.hist.quote("ZIM")
data.MATX <- get.hist.quote("MATX")
data.ZIM <- data.ZIM[Sys.Date() - 0:364]
data.MATX <- data.MATX[Sys.Date() - 0:364]
head(data.ZIM)
head(data.MATX)
min(data.ZIM$Close)
max(data.ZIM$Close)
minZIM = data.ZIM[data.ZIM$Close == 24.34]
maxZIM = data.ZIM[data.ZIM$Close == 88.62]
data.ZIM[data.ZIM$Close == 88.62]
minZIM
maxZIM
min(data.MATX$Close)
max(data.MATX$Close)
minMATX = data.MATX[data.MATX$Close == 60.07, ]
maxMATX = data.MATX[data.MATX$Close == 121.47, ]
minMATX
maxMATX
I was trying to extract the data with tseries and ran into difficulty when trying to print the row; specifically, I was trying to find the dates on which the 52-week low and high occurred.
Use which.min and which.max to find indexes of minimum and maximum close and use those to look up the time.
library(tseries)
data.ZIM <- get.hist.quote("ZIM", start = Sys.Date() - 364)
tmin <- time(data.ZIM)[which.min(data.ZIM$Close)]; tmin
## [1] "2021-03-31"
data.ZIM[tmin]
## Open High Low Close
## 2021-03-31 24.75 24.99 24.15 24.34
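The 52-week high can be looked up the same way; a small sketch mirroring the lines above:
# same idea with which.max for the 52-week high
tmax <- time(data.ZIM)[which.max(data.ZIM$Close)]; tmax
data.ZIM[tmax]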
Does someone have a good idea how to get the return for a stock over a specific time period, e.g. AAPL from 2000-01-01 to 2020-01-01? I know there is something like
periodReturn(AAPL,period='yearly',subset='2000::')
But this is giving me the yearly returns. I actually just want the total return over the whole period.
Fully in quantmod functions:
library(quantmod)
aapl <- getSymbols("AAPL", from = "2000-01-01", auto.assign = F)
# first and last get the first and last entry in the timeseries.
# select the close values
# Delt calculates the percent difference
Delt(Cl(first(aapl)), Cl(last(aapl)))
Delt.0.arithmetic
2020-07-08 94.39573
Or in simple maths:
as.numeric(Cl(last(aapl))) / as.numeric(Cl(first(aapl))) - 1
[1] 94.39573
I'm taking the close value of the first entry. You might take the open, high, or low of that day instead. This has some effect on the return: the first day's values in 2000 range from a low of 3.63 to a high of 4.01. Depending on your choice, the return will be between 93.9 and 104 times your starting capital.
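For example, measuring from the first day's high or low instead of its close (a sketch following the same pattern; Hi() and Lo() are the matching quantmod column extractors):
# return measured from the first day's high and low instead of the close
as.numeric(Cl(last(aapl))) / as.numeric(Hi(first(aapl))) - 1
as.numeric(Cl(last(aapl))) / as.numeric(Lo(first(aapl))) - 1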
I am working on building a model that can predict NFL games, and am looking to run full season simulations and generate expected wins and losses for each team.
Part of the model is based on a rating that changes each week based on whether or not a team lost. For example, let's say the Bills and Ravens each started Sunday's game with a rating of 100; after the Ravens win, their rating increases to 120 and the Bills' decreases to 80.
While running the simulation, I would like to update the teams rating throughout in order to get a more accurate representation of the number of ways a season could play out, but am not sure how to include something like this within the loop.
My loop for the 2017 season.
full.sim <- NULL
for(i in 1:10000){
nflpredictions$sim.homewin <- with(nflpredictions, rbinom(nrow(nflpredictions), 1, homewinpredict))
nflpredictions$winner <- with(nflpredictions, ifelse(sim.homewin, as.character(HomeTeam), as.character(AwayTeam)))
winningteams <- table(nflpredictions$winner)
projectedwins <- data.frame(Team=names(winningteams), Wins=as.numeric(winningteams))
full.sim <- rbind(full.sim, projectedwins)
}
full.sim <- aggregate(full.sim$Wins, by= list(full.sim$Team), FUN = sum)
full.sim$expectedwins <- full.sim$x / 10000
full.sim$expectedlosses <- 16 - full.sim$expectedwins
This works great when running the simulation for 2017 where I already have the full seasons worth of data, but I am having trouble adapting for a model to simulate 2018.
My first idea is to create another for loop within the loop that iterates through the rows and updates the ratings for each week, something along the lines of
full.sim <- NULL
for(i in 1:10000){
for(j in 1:nrow(nflpredictions)){
The idea being to update a team's rating, then generate the win probability for the week using the GLM I have built, simulate who wins, and then continue through the entire dataframe. The only thing really holding me back is not knowing how to add a value to a row based on a row that is not directly above. So what would be the easiest way to update the ratings each week based on the result of the last game that team played in?
The dataframe is built like this, but obviously on a larger scale:
nflpredictions
Week HomeTeam AwayTeam HomeRating AwayRating HomeProb AwayProb
1 BAL BUF 105 85 .60 .40
1 NE HOU 120 90 .65 .35
2 BUF LAC NA NA NA NA
2 JAX NE NA NA NA NA
I hope I explained this well enough... Any input is greatly appreciated, thanks!
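One possible shape for this, as a rough hypothetical sketch rather than a finished answer: keep the current ratings in a named vector, walk the schedule row by row, and write the updated ratings back after each simulated game so that later rows read the new values. The starting ratings, the rating step k, and the win-probability line are placeholders; the GLM prediction would replace that line.
# hypothetical sketch: ratings live in a named vector and are updated in place
ratings <- c(BAL = 105, BUF = 85, NE = 120, HOU = 90, LAC = 100, JAX = 100)  # assumed starting values

simulate_season <- function(schedule, ratings, k = 20) {
  wins <- setNames(numeric(length(ratings)), names(ratings))
  for (j in seq_len(nrow(schedule))) {
    home <- as.character(schedule$HomeTeam[j])
    away <- as.character(schedule$AwayTeam[j])
    # placeholder win probability from the current ratings; swap in the GLM here
    p.home <- ratings[home] / (ratings[home] + ratings[away])
    home.win <- rbinom(1, 1, p.home)
    winner <- if (home.win == 1) home else away
    loser <- if (home.win == 1) away else home
    wins[winner] <- wins[winner] + 1
    # update the ratings so the next game involving these teams sees the new values
    ratings[winner] <- ratings[winner] + k
    ratings[loser] <- ratings[loser] - k
  }
  wins
}

# one simulated season; wrap this call in the 1:10000 loop and rbind as before
simulate_season(nflpredictions, ratings)
The point is only that the rating update happens inside the row loop, so each week's probability is computed from whatever the earlier games wrote back.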
I am learning time series analysis with R and came across these two functions while learning. I understand that the output of both is periodic data defined by the frequency of the period, and the only difference I can see is the OHLC output option in to.period().
Other than the OHLC output, when should each of these functions be used?
to.period and all the to.minutes, to.weekly, to.quarterly are indeed meant for OHLC data.
If you take the function to.period it will take the open from the first day of the period, the close of the last day of the period and the highest high / lowest low of the specified period. These functions work very well together with the quantmod / tidyquant / quantstrat packages. See code example 1.
If you give the to.period non-OHLC data, but a timeseries with 1 data column, you still get a sort of OHLC back. See code example 2.
Now period.apply is more interesting. Here you can supply your own functions to be applied to the data. Especially in combination with endpoints, this is a powerful function for time series data if you want to apply your function over different time periods. The index is mostly specified with endpoints, since with endpoints you can create the index you need to move to higher time levels (from day to week, etc.). See code examples 3 and 4.
Remember to use matrix functions with period.apply if you have more than one column of data, since an xts object is basically a matrix plus an index. See code example 5.
More info in this DataCamp course.
library(xts)
data(sample_matrix)
zoo.data <- zoo(rnorm(31)+10,as.Date(13514:13744,origin="1970-01-01"))
# code example 1
to.quarterly(sample_matrix)
sample_matrix.Open sample_matrix.High sample_matrix.Low sample_matrix.Close
2007 Q1 50.03978 51.32342 48.23648 48.97490
2007 Q2 48.94407 50.33781 47.09144 47.76719
# same as to.quarterly
to.period(sample_matrix, period = "quarters")
sample_matrix.Open sample_matrix.High sample_matrix.Low sample_matrix.Close
2007 Q1 50.03978 51.32342 48.23648 48.97490
2007 Q2 48.94407 50.33781 47.09144 47.76719
# code example 2
to.period(zoo.data, period = "quarters")
zoo.data.Open zoo.data.High zoo.data.Low zoo.data.Close
2007-03-31 9.039875 11.31391 7.451139 10.35057
2007-06-30 10.834614 11.31391 7.451139 11.28427
2007-08-19 11.004465 11.31391 7.451139 11.30360
# code example 3 using base standard deviation in the chosen period
period.apply(zoo.data, endpoints(zoo.data, on = "quarters"), sd)
2007-03-31 2007-06-30 2007-08-19
1.026825 1.052786 1.071758
# code example 4: a self-defined function summing x + x over each period
period.apply(zoo.data, endpoints(zoo.data, on = "quarters"), function(x) sum(x + x) )
2007-03-31 2007-06-30 2007-08-19
1798.7240 1812.4736 993.5729
# code example 5
period.apply(sample_matrix, endpoints(sample_matrix, on = "quarters"), colMeans)
Open High Low Close
2007-03-31 50.15493 50.24838 50.05231 50.14677
2007-06-30 48.47278 48.56691 48.36606 48.45318
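To make the role of endpoints() concrete, it can be called on its own; it returns the last row number of each period (plus a leading 0), and period.apply() aggregates over the ranges between consecutive entries:
# endpoints() by itself: the row numbers closing each quarter, plus a leading 0
endpoints(sample_matrix, on = "quarters")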
I'll get straight to the point: I have been given some data sets in .csv format containing regularly logged sensor data from a machine. However, this data set also contains measurements taken when the machine is turned off, which I would like to separate from the data logged from when it is turned on. To subset the relevant data I also have a file containing start and end times of these shutdowns. This file is several hundred rows long.
Examples of the relevant files for this problem:
file: sensor_data.csv
sens_name,time,measurement
sens_A,17/12/11 06:45,32.3321
sens_A,17/12/11 08:01,36.1290
sens_B,17/12/11 05:32,17.1122
sens_B,18/12/11 03:43,12.3189
##################################################
file: shutdowns.csv
shutdown_start,shutdown_end
17/12/11 07:46,17/12/11 08:23
17/12/11 08:23,17/12/11 09:00
17/12/11 09:00,17/12/11 13:30
18/12/11 01:42,18/12/11 07:43
To subset data in R, I have previously used the subset() function with simple conditions which has worked fine, but I don't know how to go about subsetting sensor data which fall outside multiple shutdown date ranges. I've already formatted the date and time data using as.POSIXlt().
I'm suspecting some scripting may be involved to come up with a good solution, but I'm afraid I am not yet experienced enough to handle this type of data.
Any help, advice, or solutions will be greatly appreciated. Let me know if there's anything else needed for a solution.
I prefer POSIXct format for ranges within data frames. We build a logical index marking measurements that fall outside every shutdown window, i.e. t < shutdown_start OR t > shutdown_end for every row of shutdowns. With this index we can then subset the data as needed:
posixct <- function(x) as.POSIXct(x, format="%d/%m/%y %H:%M")
sensor_data$time <- posixct(sensor_data$time)
shutdowns[] <- lapply(shutdowns, posixct)
ind1 <- sapply(sensor_data$time, function(t) {
sum(t < shutdowns[,1] | t > shutdowns[,2]) == nrow(shutdowns)})
#Measurements taken while the machine was on (outside all shutdowns)
sensor_data[ind1,]
# sens_name time measurement
# 1 sens_A 2011-12-17 06:45:00 32.3321
# 3 sens_B 2011-12-17 05:32:00 17.1122
#Measurements taken during a shutdown
sensor_data[!ind1,]
# sens_name time measurement
# 2 sens_A 2011-12-17 08:01:00 36.1290
# 4 sens_B 2011-12-18 03:43:00 12.3189
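An equivalent vectorized version, not part of the answer above but just an alternative sketch: outer() compares every measurement time against every shutdown window at once, and a row sum of zero means the time falls outside all of them.
# logical matrix: rows are measurements, columns are shutdown windows
during <- outer(sensor_data$time, shutdowns$shutdown_start, ">=") &
  outer(sensor_data$time, shutdowns$shutdown_end, "<=")
ind2 <- rowSums(during) == 0  # TRUE when the measurement is outside every shutdown
all(ind2 == ind1)             # should agree with the sapply version above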