Importing an Excel balance sheet into R

I am trying to import an Excel spreadsheet of a balance sheet into R, so that it ends up looking more or less like the balance sheet does now:
Assets 2011, 2010, 2009
Non current assets 32.322 3.111
intangible assets 12,222
something along those lines. I also want to import the second tab, which is a different balance sheet. The idea is that I will probably have 50 or more balance sheets. Would this be inefficient for analysis?
I am only interested in a few of the same variables from each balance sheet (think current assets, non-current assets for all the years, etc.). Is it possible to import just specific rows and columns from an Excel spreadsheet?
For instance, just import:
A) Non current assets 32.322 3.111 322
B) Current assets 345 543 2.233
etc.? The row names do not change, so could I use a function to do this?

Look at quantmod!
library(quantmod)
library(xlsx)
getFin("GS")        # downloads the statements and assigns GS.f in the workspace
gs_BS <- GS.f$BS$A  # annual balance sheet
str(gs_BS)
#num [1:42, 1:4] 106533 NA 113003 71883 NA ...
#- attr(*, "dimnames")=List of 2
# ..$ : chr [1:42] "Cash & Equivalents" "Short Term Investments" "Cash and Short Term Investments" "Accounts Receivable - Trade, Net" ...
# ..$ : chr [1:4] "2015-12-31" "2014-12-31" "2013-12-31" "2012-12-31"
#- attr(*, "col_desc")= chr [1:4] "As of 2015-12-31" "As of 2014-12-31" "As of 2013-12-31" "As of 2012-12-31"
transposed <- t(gs_BS)
write.xlsx(transposed, "C:\\Users\\your_path_here\\Desktop\\bal_sheet.xlsx", row.names=FALSE)
transp <- read.xlsx("C:\\Users\\your_path_here\\Desktop\\bal_sheet.xlsx", sheetName="Sheet1")
transp$year <- c("2015","2014","2013","2012")
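Since you only need a few line items, you can subset the balance-sheet matrix by row name once it is in R; a minimal sketch, using two labels taken from the str() output above:
# keep only the rows of interest (labels come from the str() output)
wanted <- c("Cash & Equivalents", "Short Term Investments")
gs_subset <- gs_BS[rownames(gs_BS) %in% wanted, ]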
This is good too.
require(quantmod)
equityList <- read.csv("EquityList.csv", header = FALSE, stringsAsFactors = FALSE)
names(equityList) <- c("Ticker")
for (i in 1:length(equityList$Ticker)) {
  temp <- getFinancials(equityList$Ticker[i], src = "google", auto.assign = FALSE)
  # IS/BS/CF are the income statement, balance sheet and cash flow;
  # A and Q are the annual and quarterly versions
  write.csv(temp$IS$A, paste(equityList$Ticker[i], "_Income_Statement(Annual).csv", sep = ""))
  write.csv(temp$BS$A, paste(equityList$Ticker[i], "_Balance_Sheet(Annual).csv", sep = ""))
  write.csv(temp$CF$A, paste(equityList$Ticker[i], "_Cash_Flow(Annual).csv", sep = ""))
  write.csv(temp$IS$Q, paste(equityList$Ticker[i], "_Income_Statement(Quarterly).csv", sep = ""))
  write.csv(temp$BS$Q, paste(equityList$Ticker[i], "_Balance_Sheet(Quarterly).csv", sep = ""))
  write.csv(temp$CF$Q, paste(equityList$Ticker[i], "_Cash_Flow(Quarterly).csv", sep = ""))
}
Also, check this out.
https://msperlin.github.io/pafdR/importingInternet.html
There are other ways to do very similar things.
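Coming back to the original question of pulling only specific rows and columns straight from Excel: the readxl package lets you target a sheet and a fixed cell range, which works well when the row labels never move. A minimal sketch (file name, sheet number and range are placeholders):
library(readxl)
# read a fixed block from the second tab; adjust range to where
# "Non current assets" and "Current assets" sit in your files
bs <- read_excel("balance_sheet.xlsx", sheet = 2, range = "A5:D7")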

Related

Using read_html in R to get Russell 3000 holdings?

I was wondering if there is a way to automatically pull the Russell 3000 holdings from the iShares website in R using the read_html (or rvest) function?
url: https://www.ishares.com/us/products/239714/ishares-russell-3000-etf
(all holdings in the table at the bottom, not just the top 10)
So far I have had to copy and paste into an Excel document, save as a CSV, and use read_csv to create a tibble in R of the ticker, company name, and sector.
I have used read_html to pull the S&P 500 holdings from Wikipedia, but can't seem to figure out the path I need to have R automatically pull from the iShares website (and there aren't other reputable websites I've found with all ~3000 holdings). Here is the code used for the S&P 500:
read_html("https://en.wikipedia.org/wiki/List_of_S%26P_500_companies")%>%
html_node("table.wikitable")%>%
html_table()%>%
select('Symbol','Security','GICS Sector','GICS Sub Industry')%>%
as_tibble()
First post, sorry if it is hard to follow...
Any help would be much appreciated
Michael
IMPORTANT
According to the Terms & Conditions listed on BlackRock's website (here):
Use any robot, spider, intelligent agent, other automatic device, or manual process to search, monitor or copy this Website or the reports, data, information, content, software, products services, or other materials on, generated by or obtained from this Website, whether through links or otherwise (collectively, "Materials"), without BlackRock's permission, provided that generally available third-party web browsers may be used without such permission;
I suggest you ensure you are abiding by those terms before using their data. For educational purposes, here is how the data would be obtained:
First you need to get to the actual data (not the interactive JavaScript). How familiar are you with the developer tools in your browser? If you navigate through the website and track the network traffic, you will notice a large AJAX request:
https://www.ishares.com/us/products/239714/ishares-russell-3000-etf/1467271812596.ajax?tab=all&fileType=json
This is the data you need (all of it). After locating it, all that is left is cleaning the data. Example:
library(jsonlite)
# Locate the raw data by searching the Network traffic:
url <- "https://www.ishares.com/us/products/239714/ishares-russell-3000-etf/1467271812596.ajax?tab=all&fileType=json"
# pull the data in via fromJSON
x <- jsonlite::fromJSON(url, flatten=TRUE)
# Large list (10.4 Mb)
# use a combination of `lapply` and `rapply` to unlist, structuring the results as one large list
y <- lapply(rapply(x, enquote, how="unlist"), eval)
# Large list (50677 elements, 6.9 Mb)
y1 <- y[1:15]
> str(y1)
List of 15
$ aaData1 : chr "MSFT"
$ aaData2 : chr "MICROSOFT CORP"
$ aaData3 : chr "Equity"
$ aaData.display: chr "2.95"
$ aaData.raw : num 2.95
$ aaData.display: chr "109.41"
$ aaData.raw : num 109
$ aaData.display: chr "2,615,449.00"
$ aaData.raw : int 2615449
$ aaData.display: chr "$286,156,275.09"
$ aaData.raw : num 2.86e+08
$ aaData.display: chr "286,156,275.09"
$ aaData.raw : num 2.86e+08
$ aaData14 : chr "Information Technology"
$ aaData15 : chr "2588173"
Updated: In case you are unable to clean the data, here you are:
testdf <- data.frame(matrix(unlist(y), nrow=50677, byrow=T), stringsAsFactors=FALSE)
# Where we want to break the DF (every nth row)
breaks <- 17
# number of rows in the full DF
nbr.row <- nrow(testdf)
repeats <- rep(1:ceiling(nbr.row/breaks), each=breaks)[1:nbr.row]
# split DF for clean-up
newDF <- split(testdf, repeats)
Result:
> str(head(newDF))
List of 6
$ 1:'data.frame': 17 obs. of 1 variable:
..$ matrix.unlist.y...nrow...50677..byrow...T.: chr [1:17] "MSFT" "MICROSOFT CORP" "Equity" "2.95" ...
$ 2:'data.frame': 17 obs. of 1 variable:
..$ matrix.unlist.y...nrow...50677..byrow...T.: chr [1:17] "AAPL" "APPLE INC" "Equity" "2.89" ...
$ 3:'data.frame': 17 obs. of 1 variable:
..$ matrix.unlist.y...nrow...50677..byrow...T.: chr [1:17] "AMZN" "AMAZON COM INC" "Equity" "2.34" ...
$ 4:'data.frame': 17 obs. of 1 variable:
..$ matrix.unlist.y...nrow...50677..byrow...T.: chr [1:17] "BRKB" "BERKSHIRE HATHAWAY INC CLASS B" "Equity" "1.42" ...
$ 5:'data.frame': 17 obs. of 1 variable:
..$ matrix.unlist.y...nrow...50677..byrow...T.: chr [1:17] "FB" "FACEBOOK CLASS A INC" "Equity" "1.35" ...
$ 6:'data.frame': 17 obs. of 1 variable:
..$ matrix.unlist.y...nrow...50677..byrow...T.: chr [1:17] "JNJ" "JOHNSON & JOHNSON" "Equity" "1.29" ...
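To collapse those 17-row chunks into a single holdings table, you can also reshape the unlisted vector directly; a minimal sketch (the positions used for ticker, name and sector are assumptions read off the str(y1) output above):
# 17 values per holding, filled by row; positions 1, 2 and 14 appear to be
# ticker, company name and sector in the str(y1) output above
wide <- as.data.frame(matrix(unlist(y), ncol = 17, byrow = TRUE), stringsAsFactors = FALSE)
holdings <- data.frame(ticker = wide[[1]], name = wide[[2]], sector = wide[[14]],
                       stringsAsFactors = FALSE)
head(holdings)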

Can't get metadata from dataframe using DataframeSource in tm for R

I have a dataframe with the following variables:
doc_id text URL author date forum
When I run
samplecorpus <- Corpus(DataframeSource(sampledataframe))
the documentation says I should get a corpus with all of the extra variables added as document-level metadata.
https://rdrr.io/rforge/tm/man/DataframeSource.html
http://finzi.psych.upenn.edu/R/library/tm/html/DataframeSource.html
Instead, I get a corpus that has all of the right documents in the right order, but all of their metadata is blank. I need this metadata to filter the documents for future analysis.
Someone else asked a similar question, but it never got answered: "In newer tm versions (a readTabular() replacement): tm package DataframeSource() ignores my other columns as metadata"
Does anyone have any ideas on how to fix this?
Thanks!
The documentation for tm explains this if you dig down (see ??tm::DublinCore). From the docs:
A corpus has two types of metadata. Corpus metadata ("corpus") contains corpus specific metadata in form of tag-value pairs. Document level metadata ("indexed") contains document specific metadata but is stored in the corpus as a data frame. Document level metadata is typically used for semantic reasons (e.g., classifications of documents form an own entity due to some high-level information like the range of possible values) or for performance reasons (single access instead of extracting metadata of each document). The latter can be seen as a form of indexing, hence the name "indexed". Document metadata ("local") are tag-value pairs directly stored locally at the individual documents.
DataframeSource automatically assigns only the corpus metadata*. For example, see what the following prints:
library(tm)
data <- data.frame(doc_id = c(234345345, 1299),
                   text = c("The Prince and the Pauper",
                            "Little Women"),
                   author = c('Mark Twain', 'Louisa May Alcott'),
                   date = c(1881, 1868),
                   stringsAsFactors = FALSE)
samplecorpus <- Corpus(DataframeSource(data))
meta(samplecorpus)
# Or even
meta(samplecorpus[1], tag = 'author')
In order to assign metadata at the document level, you can work with meta to change tags. Bizarrely, this only works if you use VCorpus. So changing the above slightly, you can do:
samplecorpus <- VCorpus(DataframeSource(data))
# Can now set document metadata tags
meta(samplecorpus[[1]], tag = 'author') <- 'Mark Twain'
*EDIT:
Contemplating further (and responding to OP's comment), I agree that the documentation is not a completely accurate description of the package's observed behavior. The quoted documentation above refers to three levels (corpus, indexed document level, and local document level), which in my example appear to correspond to samplecorpus, samplecorpus[1], and samplecorpus[[1]], respectively. If this is correct, then the metadata is being assigned by DataframeSource at the promised level (if somewhat vaguely, as the docs never specify which document level). However, the docs also claim the indexed document level is stored as a data frame and the local level as tag-value pairs, but both are stored as lists. Confusing. I can only conclude that this is either a bug in the package implementation or an error in the docs.
Barring contacting the package authors to clear this up (not a bad idea), I would propose the following workaround:
samplecorpus <- VCorpus(DataframeSource(data))
# copy each indexed (data-frame) tag down into the local, per-document metadata
transfer_metadata <- function(x, i, tag){
  return(meta(x[i], tag=tag)[[tag]])
}
tags <- colnames(data)
tags <- tags[! tags %in% c('doc_id', 'text')]
for (i in 1:length(samplecorpus)){
  for (tag in tags){
    meta(samplecorpus[[i]], tag=tag) <- transfer_metadata(samplecorpus, i=i, tag=tag)
  }
}
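To check that the transfer worked, read a tag back from an individual document (expected output shown as a comment, based on the example data above):
meta(samplecorpus[[2]], tag = 'author')
# [1] "Louisa May Alcott"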
You have to check whether everything is loaded correctly. I made an example docs data.frame so you can see how it works. I used the same column names you have and added one extra (tags). Based on this example you can check whether you have an issue somewhere.
docs <- data.frame(doc_id = c("doc_1", "doc_2"),
                   text = c("This is a text.", "This another one."),
                   url = c("https://stackoverflow.com/questions/52433344/cant-get-metadata-from-dataframe-using-dataframesource-in-tm-for-r",
                           "https://stackoverflow.com/questions/52433344/cant-get-metadata-from-dataframe-using-dataframesource-in-tm-for-r"),
                   author = c("Emi", "Emi"),
                   date = as.Date(c("2018-09-20", "2018-09-21")),
                   forum = c("stackoverflow", "stackoverflow"),
                   tags = c("r", "tm"),
                   stringsAsFactors = TRUE)
# use Corpus or VCorpus
my_corpus <- Corpus(DataframeSource(docs))
meta(my_corpus)
url author date
1 https://stackoverflow.com/questions/52433344/cant-get-metadata-from-dataframe-using-dataframesource-in-tm-for-r Emi 2018-09-20
2 https://stackoverflow.com/questions/52433344/cant-get-metadata-from-dataframe-using-dataframesource-in-tm-for-r Emi 2018-09-21
forum tags
1 stackoverflow r
2 stackoverflow tm
my_index <- meta(my_corpus, "tags") == "r"
inspect(my_corpus[my_index])
<<SimpleCorpus>>
Metadata: corpus specific: 1, document level (indexed): 5
Content: documents: 1
doc_1
This is a text.
Now beware, there is a difference in how meta is treated. If you do str(my_corpus) you will see the following:
List of 2
$ doc_1:List of 2
..$ content: chr "This is a text."
..$ meta :List of 7
.. ..$ author : chr(0)
.. ..$ datetimestamp: POSIXlt[1:1], format: "2018-09-21 08:55:44"
.. ..$ description : chr(0)
.. ..$ heading : chr(0)
.. ..$ id : chr "doc_1"
.. ..$ language : chr "en"
.. ..$ origin : chr(0)
.. ..- attr(*, "class")= chr "TextDocumentMeta"
..- attr(*, "class")= chr [1:2] "PlainTextDocument" "TextDocument"
$ doc_2:List of 2
......
The meta info you see here is from meta(my_corpus, type = "local"). The metadata loaded with DataframeSource is of type indexed; see meta(my_corpus, type = "indexed").
Page 5 of the tm vignette is important to read and experiment with, to see all the different options that meta and DublinCore offer.

xpathApply: How to pass multiple paths or nodes?

# parse PubMed data
library(XML)     # xpathApply
library(rentrez) # entrez_fetch
pmids <- c("25506969","25032371","24983039","24983034","24983032","24983031","26386083",
           "26273372","26066373","25837167","25466451","25013473","23733758")
# The IDs above are a mix of books and journal articles
# ID 23733758 is a journal article and has no abstract
data.pubmed <- entrez_fetch(db = "pubmed", id = pmids, rettype = "xml", parsed = TRUE)
abstracts <- xpathApply(data.pubmed, "//Abstract", xmlValue)
names(abstracts) <- pmids
This works well if every record has an abstract. However, when there is a PMID without a PubMed abstract (#23733758, or a book article or something else), it is skipped, resulting in the error: 'names' attribute [5] must be the same length as the vector [4]
Q: How to pass multiple paths/nodes so that, I can extract journal article, Books or Reviews ?
UPDATE: hrbrmstr's solution helps to address the NA. But can xpathApply take multiple nodes, like c("//Abstract", "//ReviewArticle", etc.)?
You have to attack it one tag element up:
abstracts <- xpathApply(data.pubmed, "//PubmedArticle//Article", function(x) {
  val <- xpathSApply(x, "./Abstract", xmlValue)
  if (length(val) == 0) val <- NA_character_
  val
})
names(abstracts) <- pmids
str(abstracts)
## List of 5
## $ 24019382: chr "Adenocarcinoma of the lung, a leading cause of cancer death, frequently displays mutational activation of the KRAS proto-oncoge"| __truncated__
## $ 23927882: chr "Mutations in components of the mitogen-activated protein kinase (MAPK) cascade may be a new candidate for target for lung cance"| __truncated__
## $ 23825589: chr "Aberrant activation of MAP kinase signaling pathway and loss of tumor suppressor LKB1 have been implicated in lung cancer devel"| __truncated__
## $ 23792568: chr "Sorafenib, the first agent developed to target BRAF mutant melanoma, is a multi-kinase inhibitor that was approved by the FDA f"| __truncated__
## $ 23733758: chr NA
Per your comment with an alternate way to do this:
str(xpathApply(data.pubmed, '//PubmedArticle//Article', function(x) {
xmlValue(xmlChildren(x)$Abstract)
}))
## List of 5
## $ : chr "Adenocarcinoma of the lung, a leading cause of cancer death, frequently displays mutational activation of the KRAS proto-oncoge"| __truncated__
## $ : chr "Mutations in components of the mitogen-activated protein kinase (MAPK) cascade may be a new candidate for target for lung cance"| __truncated__
## $ : chr "Aberrant activation of MAP kinase signaling pathway and loss of tumor suppressor LKB1 have been implicated in lung cancer devel"| __truncated__
## $ : chr "Sorafenib, the first agent developed to target BRAF mutant melanoma, is a multi-kinase inhibitor that was approved by the FDA f"| __truncated__
## $ : chr NA
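Regarding the UPDATE: xpathApply takes a single XPath expression rather than a vector of paths, but XPath itself supports unions via the | operator, so several node types can be matched in one query. A minimal sketch (ReviewArticle is just the node name from the question, for illustration):
# one expression, several node types, via the XPath union operator "|"
mixed <- xpathApply(data.pubmed, "//Abstract | //ReviewArticle", xmlValue)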

Sorting xts data to look like panel data in R

I need to use the 'PerformanceAnalytics' package of R, and this package requires the data to be converted into an xts object. The data can be downloaded from this link: https://drive.google.com/file/d/0B8usDJAPeV85elBmWXFwaXB4WUE/edit?usp=sharing. Hence, I have created an xts object using the following commands:
library(xts)
data <- read.csv('monthly.csv')
dataxts <- xts(data[,-1], order.by = as.Date(data$datadate, format = "%d/%m/%Y"))
But after doing this, it loses the panel data structure. I tried to sort the xts data to get it back into panel form but failed.
Can anyone please help me reorganize the xts data to look like panel data? I need to sort by firm id (gvkey) and date (datadate).
xts objects are sorted by time index only. They cannot be sorted by anything else.
I would encourage you to split your data.frame into a list, by gvkey. Then convert each list element to xts and remove the columns that do not vary across time, storing them as xtsAttributes. You might also want to consider using the yearmon class, since you're dealing with monthly data.
You will have to determine how you want to encode non-numeric, time-varying values, since you cannot mix types in xts objects.
Data <- read.csv('monthly.csv', nrow=1000, as.is=TRUE)
DataList <- split(Data, Data$gvkey)
xtsList <- lapply(DataList, function(x) {
  attrCol <- c("iid","tic","cusip","conm","exchg","secstat","tpci",
               "cik","fic","conml","costat","idbflag","dldte")
  numCol <- c("ajexm","ajpm","cshtrm","prccm","prchm","prclm",
              "trfm", "trt1m", "rawpm", "rawxm", "cmth", "cshom", "cyear")
  toEncode <- c("isalrt","curcdm")
  y <- xts(x[,numCol], as.Date(x$datadate, format="%d/%m/%Y"))
  xtsAttributes(y) <- as.list(x[1,attrCol])
  y
})
Each list element is now an xts object, and is much more compact, since you do not repeat completely redundant data. And you can easily run analysis on each gvkey via lapply and friends.
> str(xtsList[["1004"]])
An ‘xts’ object on 1983-01-31/2012-12-31 containing:
Data: num [1:360, 1:13] 3.38 3.38 3.38 3.38 3.38 ...
- attr(*, "dimnames")=List of 2
..$ : NULL
..$ : chr [1:13] "ajexm" "ajpm" "cshtrm" "prccm" ...
Indexed by objects of class: [Date] TZ: UTC
xts Attributes:
List of 13
$ iid : int 1
$ tic : chr "AIR"
$ cusip : int 361105
$ conm : chr "AAR CORP"
$ exchg : int 11
$ secstat: chr "A"
$ tpci : chr "0"
$ cik : int 1750
$ fic : chr "USA"
$ conml : chr "AAR Corp"
$ costat : chr "A"
$ idbflag: chr "D"
$ dldte : chr ""
And you can access the attributes via xtsAttributes:
> xtsAttributes(xtsList[["1004"]])$fic
[1] "USA"
> xtsAttributes(xtsList[["1004"]])$tic
[1] "AIR"
An efficient way to achieve this goal is to convert the panel data (long format) into wide format using the 'reshape2' package. After performing the estimations, convert it back to long format or panel data format. Here is an example:
library(foreign)
library(reshape2)
dd <- read.dta("DDA.dta")  # DDA.dta is Stata data; keep only date, id and the variable of interest (i.e. three columns in total)
wdd <- dcast(dd, datadate ~ gvkey)  # gvkey is the id
require(PerformanceAnalytics)
wddxts <- xts(wdd[,-1], order.by = as.Date(wdd$datadate, format = "%Y-%m-%d"))
ssd60A <- rollapply(wddxts, width = 60, SemiDeviation, by.column = TRUE, fill = NA)  # e.g. of rolling window calculation
ssd60A.df <- as.data.frame(ssd60A)  # convert xts to data frame
ssd60A.df$datadate <- rownames(ssd60A.df)  # insert time index
lssd60A.df <- melt(ssd60A.df, id.vars = c('datadate'), var = 'gvkey')  # convert back to panel format
write.dta(lssd60A.df, "ssd60A.dta", convert.factors = "string")  # export as Stata file
Then simply merge it with the master database to perform some regression.

Writing a Simple Triplet Matrix to a File?

I am using the tm package to compute a term-document matrix for a dataset, and I now have to write the term-document matrix to a file, but when I use the write functions in R I get an error.
Here is the code I am using and the error I am getting:
data("crude")
tdm <- TermDocumentMatrix(crude, control = list(weighting = weightTfIdf, stopwords = TRUE))
dtm <- DocumentTermMatrix(crude, control = list(weighting = weightTfIdf, stopwords = TRUE))
and this is the error I get when I use the write.table command on this data:
Error in cat(list(...), file, sep, fill, labels, append) : argument 1 (type 'list') cannot be handled by 'cat'
I understand that tdm is an object of type simple triplet matrix, but how can I write it to a plain text file?
I think I might be misunderstanding the question, but if all you want to do is export the term document matrix to a file, then how about this:
m <- inspect(tdm)
DF <- as.data.frame(m, stringsAsFactors = FALSE)
write.table(DF)
Is that what you're after mate?
Hope that helps a little,
Tony Breyal
Should the file be "human-readable"? If not, use dump, dput, or save. If so, convert your list into a data.frame.
Edit: You can convert your list into a matrix if each list element is equal length by doing matrix(unlist(list.name), nrow=length(list.name[[1]])) or something like that (or with plyr).
Why aren't you doing your SVM analysis in R (e.g. with kernlab)?
Edit 2: Ok, I looked at your data, and it isn't easy to convert into a matrix because the list elements aren't equal length:
> is.list(tdm)
[1] TRUE
> str(tdm)
List of 7
$ i : int [1:1475] 15 29 151 152 173 205 215 216 227 228 ...
$ j : int [1:1475] 1 1 1 1 1 1 1 1 1 1 ...
$ v : Named num [1:1475] 3.32 4.32 2.32 2 2.32 ...
..- attr(*, "names")= chr [1:1475] "1.50" "16.00" "barrel," "barrel." ...
$ nrow : int 985
$ ncol : int 20
$ dimnames :List of 2
..$ Terms: chr [1:985] "(bpd)" "(bpd)." "(gcc)" "(it) appears to be nearing a crossroads with regard to\nderegulation, both as it pertains to investments and imports," ...
..$ Docs : chr [1:20] "127" "144" "191" "194" ...
$ Weighting: chr [1:2] "term frequency - inverse document frequency" "tf-idf"
- attr(*, "class")= chr [1:2] "TermDocumentMatrix" "simple_triplet_matrix"
In order to convert this to a matrix, you will need to either take elements of this list (e.g. i, j) or else do some other manipulation.
Edit 3: Just to conclude my commentary here: these objects are intended to be used with the inspect function (see the package vignette).
As discussed, in order to use a function like write.table, you will need to convert your list into a matrix, which requires some manipulation of that list such that you have several vectors of equal length. Looking at the structure of these tm objects: this will be very difficult to do, and I suggest you work with the helper functions that are included with that package.
dtmMatrix <- as.matrix(dtm)
write.csv(dtmMatrix, 'mydata.csv')
This certainly does the job. However, when I tried it on a very large DTM (25,000 by 35,000), it gave errors relating to lack of memory.
I used the following method:
dtm <- DocumentTermMatrix(corpus)
dtm1 <- removeSparseTerms(dtm,0.998) ##max allowed sparsity 0.998
m <- inspect(dtm1)
DF <- as.data.frame(m, stringsAsFactors = FALSE)
write.csv(DF,"mydata0.998sparse.csv")
Which reduced the size of the document term matrix to a great extent!
Here you can increase the max allowable sparsity (closer to 1) to include more terms in DF.
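If even the sparsified matrix is too big to densify, another option is to write the triplet representation itself, since the object already carries the i, j and v slots shown in the str(tdm) output above; a minimal sketch:
# export the sparse (term, doc, value) triplets directly; no dense matrix is built
triplets <- data.frame(term = tdm$dimnames$Terms[tdm$i],
                       doc = tdm$dimnames$Docs[tdm$j],
                       value = tdm$v)
write.csv(triplets, "tdm_triplets.csv", row.names = FALSE)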
