Obtaining only numeric output from viewFinancials without additional text

I calculated the dividend yield of Microsoft the following way:
# load financial data for MSFT
library(quantmod)
getFinancials('MSFT')
# calculate dividend yield for MSFT
as.numeric(first(-viewFinancials(MSFT.f, type='CF', period='A', subset=NULL)['Total Cash Dividends Paid',] /
                  viewFinancials(MSFT.f, type='BS', period='A', subset=NULL)['Total Common Shares Outstanding',]))
Here is the output:
Annual Cash Flow Statement for MSFT
Annual Balance Sheet for MSFT
[1] 1.40958
How can I get only the numeric output 1.40958 without the additional text Annual Cash Flow Statement for MSFT and Annual Balance Sheet for MSFT? Is there a way to suppress those lines?

The two strings, "Annual Cash Flow Statement for MSFT" and "Annual Balance Sheet for MSFT", are messages from viewFinancials; they are not attached to the result in any way.
R> dy <- as.numeric(first(-viewFinancials(MSFT.f, type='CF', period='A',subset = NULL)['Total Cash Dividends Paid',]/viewFinancials(MSFT.f, type='BS', period='A',subset = NULL)['Total Common Shares Outstanding',]))
Annual Cash Flow Statement for MSFT
Annual Balance Sheet for MSFT
R> dy
[1] 1.40958
If you want to squelch the messages, use suppressMessages().
R> suppressMessages(dy <- as.numeric(first(-viewFinancials(MSFT.f, type='CF', period='A',subset = NULL)['Total Cash Dividends Paid',]/viewFinancials(MSFT.f, type='BS', period='A',subset = NULL)['Total Common Shares Outstanding',])))
R> dy
[1] 1.40958
R>
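If you compute this for more than one symbol, a small wrapper keeps the suppression in one place. This is a sketch, not part of the original answer: the helper name is hypothetical, and it assumes the financials object created by getFinancials() (e.g. MSFT.f):

```r
library(quantmod)

# Hypothetical helper: computes the same quantity as above and
# silences the viewFinancials() status messages in one place
dividend_yield <- function(fin) {
  suppressMessages({
    cf <- viewFinancials(fin, type = 'CF', period = 'A')
    bs <- viewFinancials(fin, type = 'BS', period = 'A')
    as.numeric(first(-cf['Total Cash Dividends Paid', ] /
                       bs['Total Common Shares Outstanding', ]))
  })
}

# dividend_yield(MSFT.f)  # e.g. 1.40958 with the data shown above
```

The call sites then stay clean, and suppressMessages() only wraps the two viewFinancials() calls rather than the whole expression.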

Related

Finding GICS Sector using Rblpapi in R

I am trying to replace a column in my data with the output of the function: bdp(column + "equity", "GICS_SECTOR_NAME")
library(Rblpapi)
# Create raw data example
ticker <- c(2,3,4,5,6)
sector <- c(NA,NA,NA,NA,NA)
dataraw <- data.frame(ticker, sector)
dataraw$sector <- bdp("dataraw$ticker Equity", "GICS_SECTOR_NAME")
This does not work because the quotes make it a literal string, and I also have to append the word "Equity", e.g. IBM Equity.
An example of it working perfectly would be bdp("IBM Equity", "GICS_SECTOR_NAME")
You can add the "Equity" part using paste() and pass the resulting tickers as an argument to bdp():
#Create raw data example
ticker <- c("IBM", "AAPL", "MSFT", "FB")
sector <- c(NA,NA,NA,NA)
df <- data.frame(ticker, sector)
df$ticker_full <- paste(df$ticker, "US Equity", sep = " ")
conn <- Rblpapi::blpConnect()
sectors <- bdp(securities = df$ticker_full,
               fields = "GICS_SECTOR_NAME")
> print(sectors)
GICS_SECTOR_NAME
IBM US Equity Information Technology
AAPL US Equity Information Technology
MSFT US Equity Information Technology
FB US Equity Communication Services
df$sector <- sectors$GICS_SECTOR_NAME
> print(df)
ticker sector ticker_full
1 IBM Information Technology IBM US Equity
2 AAPL Information Technology AAPL US Equity
3 MSFT Information Technology MSFT US Equity
4 FB Communication Services FB US Equity
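One small robustness note, sketched below: bdp() returns its results with the security strings as row names (visible in the printed output above), so you can index by row name instead of relying on row order:

```r
# Index the bdp() result by row name rather than position;
# this stays correct even if the return order ever differs
df$sector <- sectors[df$ticker_full, "GICS_SECTOR_NAME"]
```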

R Time Series horizon doesn't reach the end date of my data

I have continuous daily data from 2015-12-01 to 2016-07-01. Since it's already been aggregated, these are unique dates with corresponding quantities.
However, whenever I convert the data into a time series using ts() and plot it with plot.ts(), the plot shows my tseries never reaches 2016-07-01 but stops around 2016-0, while a ggplot of the data frame shows the expected graph.
tseries = ts(df$F, start = c(2015,12,01), end = c(2016,07,01), frequency = 208)
plot.ts(tseries)
tseries
tl;dr: You misspecified the frequency (and start) arguments in ts(); either provide correct frequency/start arguments, or use the zoo package instead.
There is a misunderstanding about the role of frequency in ts. The frequency argument depends on the format of your data. For example, if you have daily data for a year then frequency would be 365, since one year has 365 days (ignoring leap years). start then has to match the frequency of your data. For daily data, start needs to provide the day (out of 365) of your start date.
Let's generate a date sequence from 01/12/2015 until 01/07/2016 in 1 day steps.
dates <- seq(as.Date("2015/12/01"), as.Date("2016/07/01"), by = "1 day")
Next we generate some sample data for every day.
set.seed(2017);
x <- 100 - cumprod(1 + rnorm(length(dates), 0.01, 0.05))
We create a ts object and plot.
tseries <- ts(
x,
start = c(2015, as.numeric(format(dates[1], "%j"))),
frequency = 365)
plot.ts(tseries);
Note how as.numeric(format(dates[1], "%j")) is the day of the year that corresponds to 01/12/2015; we can inspect tseries to confirm:
tseries;
#Time Series:
#Start = c(2015, 335)
#End = c(2016, 183)
#Frequency = 365
# [1] 98.91829 98.91165 98.86055 98.94935 98.94251 98.90804 99.00404 98.99416
# [9] 98.99744 98.90906 98.87945 98.78015 98.81349 98.78344 98.85828 98.77868
# [17] 98.79591 98.70452 98.76475 98.80961 98.78934 98.82882 98.70564 98.79937
# [25] 98.74176 98.72526 98.73731 98.68448 98.72841 98.70941 98.74059 98.71282
# [33] 98.66201 98.71089 98.67883 98.45837 98.52216 98.36389 98.37027 98.37203
# [41] 98.44200 98.42452 98.46624 98.52957 98.53985 98.44419 98.42480 98.33157
# [49] 98.23825 98.11436 97.96746 98.10811 98.07050 98.05331 98.18894 98.26369
# [57] 98.10984 97.95076 97.95309 97.96750 97.86906 97.82784 97.95063 97.92361
# [65] 97.80495 97.75671 97.78848 97.70829 97.60183 97.67609 97.51230 97.57353
# [73] 97.62052 97.79321 97.57016 97.18006 97.25852 97.25187 97.13964 97.10216
# [81] 97.44889 97.22116 97.20823 97.16990 97.13855 97.10450 96.93565 96.79920
# [89] 96.82153 96.84342 96.86261 96.78734 97.08032 97.19380 97.30147 97.21610
# [97] 97.23463 97.39229 97.47829 97.56498 97.53851 97.34443 97.32700 97.18775
#[105] 97.29549 97.53480 97.44546 97.38659 97.23025 97.29599 97.36905 97.38267
#[113] 97.30787 97.36199 97.41452 97.31361 96.98821 96.92238 96.78418 96.87618
#[121] 96.78397 96.79244 96.80496 96.87619 97.00473 96.77162 96.61101 96.91671
#[129] 96.70001 96.75187 96.85348 96.77186 96.59937 96.72491 96.81188 96.80928
#[137] 96.71012 96.39952 96.49581 96.50415 96.56627 96.51843 96.72303 96.59825
#[145] 96.71873 96.65840 96.69296 96.86833 96.79887 96.71224 96.75685 96.76380
#[153] 96.63228 96.74893 96.51374 96.66589 96.61319 96.75718 96.62919 96.39169
#[161] 96.47066 96.59940 96.52173 96.50408 96.22667 96.03279 96.01529 95.80778
#[169] 95.77858 95.90350 96.15438 95.86239 95.99304 95.89340 95.70911 95.74620
#[177] 95.66125 95.81266 95.56044 95.36743 95.35368 95.54013 95.17587 95.07042
#[185] 94.67955 94.57888 94.57412 94.37998 94.36650 93.92591 93.82484 93.71398
#[193] 93.66040 93.60182 93.51263 93.54927 92.93701 92.72238 92.63081 92.85764
#[201] 92.61470 92.28663 92.28687 92.17086 92.31845 92.11827 91.72824 91.78567
#[209] 91.68942 91.34734 90.89702 91.32072 91.11538 90.83474
Note that the 335th day of 2015 corresponds to the 1st of December 2015, and the 183rd day of 2016 to the 1st of July 2016.
As an alternative we can also use the zoo package, which has the benefit that we don't have to work out the proper frequency (and start) arguments.
library(zoo);
tseries <- zoo(x, dates);
plot.zoo(tseries);
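As a sanity check on the Start and End values reported above, "%j" formats a Date as its day of the year, so both endpoints can be computed directly:

```r
# "%j" gives the day-of-year (001-366) for a Date
as.numeric(format(as.Date("2015-12-01"), "%j"))  # 335
as.numeric(format(as.Date("2016-07-01"), "%j"))  # 183 (2016 is a leap year)
```

These match Start = c(2015, 335) and End = c(2016, 183) printed by tseries.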

Extract specific stocks based on their price from Quant Mod in R

I'm running quantmod and I want to read in a list of stocks and their prices as of a certain date. I then want to keep those stocks that meet a specific threshold.
My code starts:
library(quantmod)
s = c("AAPL","FB","GOOG", "CRM")
e = new.env() #environment in which to store data
getSymbols(s, src="yahoo", env=e)
prices = do.call(merge, eapply(e, Cl)[s])
today = prices["2017-04-07",]
today
The output is:
AAPL.Close FB.Close GOOG.Close CRM.Close
2017-04-07 143.34 140.78 824.67 84.38
I want to keep only those with a price >140 so it should read:
AAPL.Close GOOG.Close
2017-04-07 143.34 824.67
You already have prices as of a certain date in today. So you just need to subset today by the columns with a close price > 140. You can do that by subsetting the columns with a logical vector.
R> today[, today > 140]
AAPL.Close FB.Close GOOG.Close
2017-04-07 143.34 140.78 824.67
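A sketch of extending the same idea to the full price history (names as in the code above): the one-row comparison can be coerced to a plain logical vector and used to select columns of prices.

```r
# Coerce the 1-row xts comparison to a plain logical vector,
# then keep only the matching columns across the whole history
keep <- as.logical(today > 140)
prices_high <- prices[, keep]
```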

extract results after Post query

I am trying to automatically extract electricity offers from this site. Once I set the postcode (e.g. 3000), I can download the pdf files manually.
I am using httr package :
library(httr)
qr<- POST("http://www.qenergy.com.au/What-Are-Your-Options",
query=list(postcode=3000))
library(XML)
res <- htmlParse(content(qr))
The problem is that the file URLs are not in the query response. Any help, please?
Try this
library(httr)
qr<- POST("http://www.qenergy.com.au/What-Are-Your-Options",
encode="form",
body=list(postcode=3000))
res <- content(qr)
pdfs <- as(res['//a[contains(@href, "pdf")]/@href'], "character")
head(pdfs)
# [1] "flux-content/qenergy/pdf/VIC price fact sheet jemena distribution zone business/Jemena-Freedom-Biz-5-Day-Time-of-Use-A210.pdf"
# [2] "flux-content/qenergy/pdf/VIC price fact sheet jemena distribution zone business/Jemena-Freedom-Biz-7-Day-Time-of-Use-A250.pdf"
# [3] "flux-content/qenergy/pdf/VIC price fact sheet jemena distribution zone business/Jemena-Freedom-Biz-Single-Rate-CL.pdf"
# [4] "flux-content/qenergy/pdf/VIC price fact sheet jemena distribution zone business/Jemena-Freedom-Biz-Single-Rate.pdf"
# [5] "flux-content/qenergy/pdf/VIC price fact sheet united energy distribution zone business/United-Freedom-Biz-5-Day-Time-of-Use.pdf"
# [6] "flux-content/qenergy/pdf/VIC price fact sheet united energy distribution zone business/United-Freedom-Biz-7-Day-Time-of-Use.pdf"
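To actually fetch the files, the relative paths above need the site root prepended. A sketch (the base URL is assumed from the question's link, and URLencode() handles the spaces in the directory names):

```r
base_url <- "http://www.qenergy.com.au/"
for (p in head(pdfs, 3)) {
  # percent-encode the spaces in the paths before downloading
  download.file(paste0(base_url, URLencode(p)),
                destfile = basename(p), mode = "wb")
}
```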

Quantmod FRED Metadata in R

library(quantmod)
getSymbols("GDPC1",src = "FRED")
I am trying to extract the numerical economic/financial data in FRED but also the metadata. I am trying to chart CPI and have the meta data as a labels/footnotes. Is there a way to extract this data using the quantmod package?
Title: Real Gross Domestic Product
Series ID: GDPC1
Source: U.S. Department of Commerce: Bureau of Economic Analysis
Release: Gross Domestic Product
Seasonal Adjustment: Seasonally Adjusted Annual Rate
Frequency: Quarterly
Units: Billions of Chained 2009 Dollars
Date Range: 1947-01-01 to 2014-01-01
Last Updated: 2014-06-25 7:51 AM CDT
Notes: BEA Account Code: A191RX1
Real gross domestic product is the inflation adjusted value of the
goods and services produced by labor and property located in the
United States.
For more information see the Guide to the National Income and Product
Accounts of the United States (NIPA) -
(http://www.bea.gov/national/pdf/nipaguid.pdf)
You can use the same code that's in the body of getSymbols.FRED, but change ".csv" to ".xls", then read the metadata you're interested in from the .xls file.
library(gdata)
Symbol <- "GDPC1"
FRED.URL <- "http://research.stlouisfed.org/fred2/series"
tmp <- tempfile()
download.file(paste0(FRED.URL, "/", Symbol, "/downloaddata/", Symbol, ".xls"),
              destfile = tmp)
read.xls(tmp, nrows=17, header=FALSE)
# V1 V2
# 1 Title: Real Gross Domestic Product
# 2 Series ID: GDPC1
# 3 Source: U.S. Department of Commerce: Bureau of Economic Analysis
# 4 Release: Gross Domestic Product
# 5 Seasonal Adjustment: Seasonally Adjusted Annual Rate
# 6 Frequency: Quarterly
# 7 Units: Billions of Chained 2009 Dollars
# 8 Date Range: 1947-01-01 to 2014-01-01
# 9 Last Updated: 2014-06-25 7:51 AM CDT
# 10 Notes: BEA Account Code: A191RX1
# 11 Real gross domestic product is the inflation adjusted value of the
# 12 goods and services produced by labor and property located in the
# 13 United States.
# 14
# 15 For more information see the Guide to the National Income and Product
# 16 Accounts of the United States (NIPA) -
# 17 (http://www.bea.gov/national/pdf/nipaguid.pdf)
Instead of hardcoding nrows=17, you can use grep to search for the row that has the headers of the data, and subset to only include rows before that.
dat <- read.xls(tmp, header=FALSE, stringsAsFactors=FALSE)
dat[seq_len(grep("DATE", dat[, 1])-1),]
unlink(tmp) # remove the temp file when you're done with it.
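To use the metadata as chart labels, the two columns can be turned into a named vector. A self-contained sketch with a toy stand-in for dat (in practice dat comes from read.xls() as above):

```r
# Toy stand-in for the data.frame returned by read.xls() above
dat <- data.frame(V1 = c("Title:", "Series ID:", "Units:"),
                  V2 = c("Real Gross Domestic Product", "GDPC1",
                         "Billions of Chained 2009 Dollars"),
                  stringsAsFactors = FALSE)

# Named lookup: strip the trailing colon from the field names
meta <- setNames(dat$V2, sub(":$", "", dat$V1))
meta[["Title"]]  # "Real Gross Domestic Product"
```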
FRED has a straightforward, well-documented JSON interface (http://api.stlouisfed.org/docs/fred/) which provides both metadata and time series data for all of its economic series. Access requires a FRED account and API key, but these are available on request from http://api.stlouisfed.org/api_key.html.
The Excel-style descriptive data you asked for can be retrieved using:
get.FRSeriesTags <- function(seriesNam)
{
    # seriesNam = character string containing the ID identifying the FRED series to be retrieved
    #
    library("httr")
    library("jsonlite")
    # dummy FRED api key; request valid key from http://api.stlouisfed.org/api_key.html
    apiKey <- "&api_key=abcdefghijklmnopqrstuvwxyz123456"
    base <- "http://api.stlouisfed.org/fred/"
    seriesID <- paste("series_id=", seriesNam, sep="")
    fileType <- "&file_type=json"
    #
    # get series descriptive data
    #
    datType <- "series?"
    url <- paste(base, datType, seriesID, apiKey, fileType, sep="")
    series <- fromJSON(url)$seriess
    #
    # get series tag data
    #
    datType <- "series/tags?"
    url <- paste(base, datType, seriesID, apiKey, fileType, sep="")
    tags <- fromJSON(url)$tags
    #
    # format as excel descriptive rows
    #
    description <- data.frame(Title        = series$title[1],
                              Series_ID    = series$id[1],
                              Source       = tags$notes[tags$group_id=="src"][1],
                              Release      = tags$notes[tags$group_id=="gen"][1],
                              Frequency    = series$frequency[1],
                              Units        = series$units[1],
                              Date_Range   = paste(series[1, c("observation_start","observation_end")], collapse=" to "),
                              Last_Updated = series$last_updated[1],
                              Notes        = series$notes[1],
                              row.names    = series$id[1])
    return(t(description))
}
Retrieving the actual time series data would be done in a similar way. There are several JSON packages available for R, but jsonlite works particularly well for this application.
There's a bit more to setting this up than the previous answer but perhaps worth it if you do much with FRED data.
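As a sketch of that "similar way" for the observations themselves (series/observations is the documented FRED route; a valid key must be substituted, and this follows the same URL-building style as the function above):

```r
get.FRSeriesObs <- function(seriesNam, apiKey)
{
    library("jsonlite")
    url <- paste0("http://api.stlouisfed.org/fred/series/observations?",
                  "series_id=", seriesNam,
                  "&api_key=", apiKey,
                  "&file_type=json")
    obs <- fromJSON(url)$observations
    # missing observations come back as "." and become NA here
    data.frame(date  = as.Date(obs$date),
               value = suppressWarnings(as.numeric(obs$value)))
}
```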
