Download VIX futures prices from CBOE - r

I am trying to get historical prices for VIX futures by downloading all the CSV files on this page (http://cfe.cboe.com/Products/historicalVIX.aspx). Here is the code I am using to do this:
library(XML)
#Extract all links for url
url <- "http://cfe.cboe.com/Products/historicalVIX.aspx"
doc <- htmlParse(url)
links <- xpathSApply(doc, "//a/@href")
free(doc)
#Filter out URLs ending with csv and complete the link.
links <- links[substr(links, nchar(links) - 2, nchar(links)) == "csv"]
links <- paste("http://cfe.cboe.com", links, sep="")
#Perform read.csv on each URL in links, skipping the first two URLs as they are not relevant.
c <- lapply(links[-(1:2)], read.csv, header = TRUE)
I get the error:
Error in read.table(file = file, header = header, sep = sep, quote = quote, :
more columns than column names
Upon further investigation, I realize this is because some of the CSV files are formatted differently. If I load the URL links[9] manually, I see that the first row has this disclaimer:
CFE data is compiled for the .......use of CFE data is subject to the Terms and Conditions of CBOE's Websites.
Most of the other files (e.g. links[8] and links[10]) are fine, so it seems this has been inserted at random. Is there some R magic that can be done to handle this?
Thank you.

I have a getSymbols.cfe method in my qmao package (for the getSymbols function in quantmod package) that will make this a lot easier.
#install.packages('qmao', repos='http://r-forge.r-project.org')
library(qmao)
This is from the examples section of ?getSymbols.cfe (please read the help page, as the function has a few arguments that you may want to set differently from the defaults):
getSymbols(c("VX_U11", "VX_V11"),src='cfe')
#all contracts expiring in 2010 and 2011.
getSymbols("VX",Months=1:12,Years=2010:2011,src='cfe')
#getSymbols("VX",Months=1:12,Years=10:11,src='cfe') #same
And it's not just for VIX
getSymbols(c("VM","GV"),src='cfe') #The mini-VIX and Gold vol contracts expiring this month
If you're not familiar with getSymbols, by default it stores the data in your .GlobalEnv and returns the name of the object that was saved.
> getSymbols("VX_Z12", src='cfe')
[1] "VX_Z12"
> tail(VX_Z12)
VX_Z12.Open VX_Z12.High VX_Z12.Low VX_Z12.Close VX_Z12.Settle VX_Z12.Change VX_Z12.Volume VX_Z12.EFP VX_Z12.OpInt
2012-10-26 19.20 19.35 18.62 18.87 18.9 0.0 22043 15 71114
2012-10-31 18.55 19.50 18.51 19.46 19.5 0.6 46405 319 89674
2012-11-01 19.35 19.35 17.75 17.87 17.9 -1.6 40609 2046 95720
2012-11-02 17.90 18.65 17.55 18.57 18.6 0.7 42592 1155 100691
2012-11-05 18.60 20.15 18.43 18.86 18.9 0.3 28136 110 102746
2012-11-06 18.70 18.85 17.75 18.06 18.1 -0.8 35599 851 110638
Edit
I see now that I did not answer your question, but rather pointed you to another way to get the same error! A simple way to make your code work is to make a wrapper for read.csv that uses readLines to see if the first row contains the disclaimer; if it does, skip the first row, otherwise use read.csv as normal.
myRead.csv <- function(x, ...) {
  if (grepl("Terms and Conditions", readLines(x, 1))) { # is the first row the disclaimer?
    read.csv(x, skip = 1, ...)
  } else read.csv(x, ...)
}
L <- lapply(links[-(1:2)], myRead.csv, header = TRUE)
I also applied that patch to getSymbols.cfe. You can get the latest version of qmao (1.3.11) using svn checkout (see this post if you need help with that), or you can wait until R-Forge builds it for you, which usually happens pretty quickly but could take up to a couple of days.

Related

R: read_csv reads numeric entries as logical - parsing col_logical instead of col_double

I am new to R.
I wrote code for an assignment that reads several CSV files, binds them into a data frame, and then, according to the ID, calculates the mean of either nitrate or sulfate.
Data sample:
Date sulfate nitrate ID
<date> <dbl> <dbl> <dbl>
1 2003-10-06 7.21 0.651 1
2 2003-10-12 5.99 0.428 1
3 2003-10-18 4.68 1.04 1
4 2003-10-24 3.47 0.363 1
5 2003-10-30 2.42 0.507 1
6 2003-11-11 1.43 0.474 1
...
To read the files and create a data.frame, I wrote this function:
pollutantmean <- function(pollutant, id = 1:332) {
  # creating a data frame from several files
  file_m <- list.files(path = "specdata", pattern = "*.csv", full.names = TRUE)
  read_file_m <- lapply(file_m, read_csv)
  df_1 <- bind_rows(read_file_m)
  # delete NAs
  df_clean <- df_1[complete.cases(df_1), ]
  # select rows according to id
  df_asid_clean <- filter(df_clean, ID %in% id)
  # calculate the mean of the column
  mean_result <- mean(df_asid_clean[, pollutant])
  mean_result
}
However, when the read_csv function is applied, certain entries in the nitrate column are parsed as col_logical, even though the column is numeric and the entries are numeric values. It seems the parser "expects" to receive a logical value, although the actual value is not logical.
Throughout the reading I get this message:
<...>
Parsed with column specification:
cols(
Date = col_date(format = ""),
sulfate = col_double(),
nitrate = col_logical(),
ID = col_double()
)
Warning: 41 parsing failures.
row col expected actual file
2055 nitrate 1/0/T/F/TRUE/FALSE 0.383 'specdata/288.csv'
2067 nitrate 1/0/T/F/TRUE/FALSE 0.355 'specdata/288.csv'
2073 nitrate 1/0/T/F/TRUE/FALSE 0.469 'specdata/288.csv'
2085 nitrate 1/0/T/F/TRUE/FALSE 0.144 'specdata/288.csv'
2091 nitrate 1/0/T/F/TRUE/FALSE 0.0984 'specdata/288.csv'
.... ....... .................. ...... ..................
See problems(...) for more details.
I tried to change the column class after binding rows by writing
df_1[, "nitrate"] <- as.numeric(as.character(df_1[, "nitrate"]))
but NAs are again introduced in the step that calculates the mean.
What is wrong here, and how could I solve it?
Would appreciate your help!
UPDATE: I tried to insert read_csv(col_types = list...), but I get an error that the "file" argument is not defined. As I understand it, R evaluates the call inside read_csv before lapply, and because no "file" is supplied at that point, it throws the error.
The problem with readr::read_csv() failing to parse the column types can be overcome by passing a col_types= argument through lapply(). We do this as follows:
pollutantmean <- function(directory, pollutant, id = 1:332) {
  require(readr)
  require(dplyr)
  file_m <- list.files(path = directory, pattern = "*.csv", full.names = TRUE)[id]
  read_file_m <- lapply(file_m, read_csv,
                        col_types = list(col_date(), col_double(),
                                         col_double(), col_integer()))
  # rest of code goes here. Since I am a Community Mentor in the
  # JHU Data Science Specialization, I am not allowed to post
  # a complete solution to the programming assignment
}
Note that I use the [ form of the extract operator to subset the list of file names with the id vector that is an argument to the function, which avoids reading a lot of data that isn't necessary. This eliminates the need for the filter() statement in the code posted in the question.
With some additional programming statements to complete the assignment, the code in my answer produces the correct results for the three examples posted with the assignment, as listed below.
> pollutantmean("specdata","sulfate",1:10)
[1] 4.064128
> pollutantmean("specdata", "nitrate", 70:72)
[1] 1.706047
> pollutantmean("specdata", "nitrate", 23)
[1] 1.280833
Alternately we could implement lapply() with an anonymous function that also uses read_csv() as follows:
read_file_m <- lapply(file_m, function(x) {read_csv(x,col_types=list(col_date(),col_double(),
col_double(),col_integer()))})
NOTE: while it is completely understandable that students who have been exposed to the tidyverse would like to use it for the programming assignment, the fact that dplyr isn't introduced until the next course in the sequence (and readr isn't covered at all) makes it much more difficult to use for assignments in R Programming, especially the first assignment, where dplyr non-standard evaluation gives people fits. An example of this situation is yet another Stackoverflow question on pollutantmean().
With the read_csv update you don't need lapply; you can simply pass the vector of file paths directly to read_csv, which you have already defined.
Regarding the column types, these can then be set manually in the col_types argument:
col_types = cols(Date = col_date(), sulfate = ...)
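For example, a minimal sketch assuming readr >= 2.0 (which accepts a vector of file paths and row-binds them) and the column names from the data sample above:
library(readr)
# read every file in one call and pin the column types up front,
# so nitrate is never guessed as logical
file_m <- list.files(path = "specdata", pattern = "*.csv", full.names = TRUE)
df_1 <- read_csv(file_m,
                 col_types = cols(Date = col_date(format = ""),
                                  sulfate = col_double(),
                                  nitrate = col_double(),
                                  ID = col_integer()))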

Can't Download Index Data from Yahoo R

So, I want to download an index's data but can't get the data. The problem is that Yahoo Finance sees index tickers as having a ^ before them, for example ^VIX rather than VIX. Unfortunately, my code doesn't like that. I can't find any functions or alternative ways to call this data. Also, I really don't want to change the platform from which I get my data; that would be a HUGE pain in the butt for the rest of my code.
I have tried putting in the ^ with the asset, and it won't download data. I've also tried calling the data without the ^ and that gives the wrong data.
asset <- "VIX"
ticker <- "VIX"
start.date <- as.Date('2009-09-01')
getSymbols(ticker, src='yahoo', from=start.date)
Adj.Close <- get(ticker)[,6]
When I put this in I end up getting this message:
Error in get(ticker) : object '^VIX' not found
Thank you for your time, regardless of whether or not you know a solution.
I cannot confirm your issue. There is no need here for a leading "^" symbol.
Running
library(quantmod)
ticker <- "VIX"
start.date <- as.Date('2009-09-01')
getSymbols(ticker, src='yahoo', from=start.date)
will automatically store the output in an xts object called VIX
head(get(ticker))
# VIX.Open VIX.High VIX.Low VIX.Close VIX.Volume VIX.Adjusted
#2014-12-04 28200.4 30096.9 27953.0 28447.7 811330 28447.7
#2014-12-05 26551.2 27540.7 25974.0 26056.5 377529 26056.5
#2014-12-08 25231.9 26056.5 23582.8 23582.8 367585 23582.8
#2014-12-09 23582.8 23582.8 21274.0 21274.0 570963 21274.0
#2014-12-10 19789.7 20202.0 19212.5 19295.0 539795 19295.0
#2014-12-11 18635.3 19295.0 17398.5 17728.3 1053637 17728.3
Note that you can avoid the get step if you avoid auto-assigning the output of getSymbols to the current environment:
res <- getSymbols("VIX", src='yahoo', from=start.date, auto.assign = FALSE)
I assume Maurits seeks data for the VIX Index, in which case I find it necessary to include the caret in the ticker. Building on the previous answer, we can see that a valid object is returned when we use the "^VIX" ticker.
library(quantmod)
start.date <- as.Date('2009-09-01')
ticker <- "^VIX"
getSymbols(ticker, src='yahoo', from=start.date)
[1] "^VIX"
However, get is unable to find that object:
head(get(ticker))
Error in get(ticker) : object '^VIX' not found
But using "VIX" rather than "^VIX" with get returns the desired result:
head(get("VIX"))
VIX.Open VIX.High VIX.Low VIX.Close VIX.Volume VIX.Adjusted
2009-09-01 26.01 29.23 26.00 29.15 0 29.15
2009-09-02 29.14 29.57 28.41 28.90 0 28.90
2009-09-03 28.90 28.90 26.98 27.10 0 27.10
2009-09-04 26.98 26.98 24.86 25.26 0 25.26
2009-09-08 25.26 26.15 25.26 25.62 0 25.62
2009-09-09 25.66 25.93 24.23 24.32 0 24.32
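Putting the two observations together: a minimal sketch that requests "^VIX" but sidesteps get() entirely by using auto.assign = FALSE (the column index 6 follows the original question's code):
library(quantmod)
start.date <- as.Date('2009-09-01')
# keep the result in a local variable instead of auto-assigning an object named "VIX"
vix <- getSymbols("^VIX", src = 'yahoo', from = start.date, auto.assign = FALSE)
Adj.Close <- vix[, 6]  # the adjusted close column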

base R faster than readr for reading multiple CSV files

There is a lot of documentation on how to read multiple CSVs and bind them into one data frame. I have 5000+ CSV files I need to read in and bind into one data structure.
In particular I've followed the discussion here: Issue in Loading multiple .csv files into single dataframe in R using rbind
The weird thing is that base R is much faster than any other solution I've tried.
Here's what my CSV looks like:
> head(PT)
Line Timestamp Lane.01 Lane.02 Lane.03 Lane.04 Lane.05 Lane.06 Lane.07 Lane.08
1 PL1 05-Jan-16 07:17:36 NA NA NA NA NA NA NA NA
2 PL1 05-Jan-16 07:22:38 NA NA NA NA NA NA NA NA
3 PL1 05-Jan-16 07:27:41 NA NA NA NA NA NA NA NA
4 PL1 05-Jan-16 07:32:43 9.98 10.36 10.41 10.16 10.10 9.97 10.07 9.59
5 PL1 05-Jan-16 07:37:45 9.65 8.87 9.88 9.86 8.85 8.75 9.19 8.51
6 PL1 05-Jan-16 07:42:47 9.14 8.98 9.29 9.04 9.01 9.06 9.12 9.08
I've created three methods for reading in and binding the data. The files are located in a separate directory which I define as:
dataPath <- "data"
PTfiles <- list.files(path=dataPath, full.names = TRUE)
Method 1: Base R
classes <- c("factor", "character", rep("numeric",8))
# build function to load data
load_data <- function(dataPath, classes) {
  tables <- lapply(PTfiles, read.csv, colClasses = classes, na.strings = c("NA", ""))
  do.call(rbind, tables)
}
#clock
method1 <- system.time(
  PT <- load_data(dataPath, classes)
)
Method 2: read_csv
In this case I created a wrapper function for read_csv to use
#create wrapper function for read_csv
read_csv.wrap <- function(x) {
  read_csv(x, skip = 1, na = c("NA", ""),
           col_names = c("tool", "timestamp", paste("lane", 1:8, sep = "")),
           col_types = cols(
             tool = col_character(),
             timestamp = col_character(),
             lane1 = col_double(),
             lane2 = col_double(),
             lane3 = col_double(),
             lane4 = col_double(),
             lane5 = col_double(),
             lane6 = col_double(),
             lane7 = col_double(),
             lane8 = col_double()
           ))
}
##
# Same as method 1, just uses read_csv instead of read.csv
load_data2 <- function(dataPath) {
  tables <- lapply(PTfiles, read_csv.wrap)
  do.call(rbind, tables)
}
#clock
method2 <- system.time(
  PT2 <- load_data2(dataPath)
)
Method 3: read_csv + dplyr::bind_rows
load_data3 <- function(dataPath) {
  tables <- lapply(PTfiles, read_csv.wrap)
  dplyr::bind_rows(tables)
}
#clock
method3 <- system.time(
  PT3 <- load_data3(dataPath)
)
What I can't figure out is why the read_csv and dplyr methods are slower in elapsed time when they should be faster. CPU time is reduced, but why does the elapsed (wall-clock) time increase? What's going on here?
Edit - I added the data.table method as suggested in the comments
Method 4 data.table
library(data.table)
load_data4 <- function(dataPath) {
  tables <- lapply(PTfiles, fread)
  rbindlist(tables)
}
method4 <- system.time(
  PT4 <- load_data4(dataPath)
)
The data.table method is the fastest from a CPU standpoint, but the question remains: what is going on with the read_csv methods that makes them so slow?
> rbind(method1, method2, method3, method4)
user.self sys.self elapsed
method1 0.56 0.39 1.35
method2 0.42 1.98 13.96
method3 0.36 2.25 14.69
method4 0.34 0.67 1.74
I would do that in the terminal (Unix). I would put all the files in the same folder, then navigate to that folder (in the terminal) and use the following command to create a single CSV file:
cat *.csv > merged_csv_file.csv
One observation regarding this method is that the header of each file will show up in the middle of the observations. To solve this I would suggest you do the following:
Get just the header from the first file
head -2 file1.csv > merged_csv_file.csv
then skip the first "X" lines of the other files with the following command, where "X" is the number of lines to skip.
tail -n +3 -q file*.csv >> merged_csv_file.csv
-n +3 makes tail print lines from the 3rd to the end, -q tells it not to print the header with the file name (see the man page), and >> appends to the file instead of overwriting it as > would.
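Back in R, the merged file can then be read with a single call; a minimal sketch, assuming the classes vector from Method 1 still matches the columns:
PT_all <- read.csv("merged_csv_file.csv", colClasses = classes,
                   na.strings = c("NA", ""))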
I might have found a related issue. I am reading in nested CSV data from some simulation output, where multiple columns have CSV formatted data as elements, which I need to unnest and reshape for analysis.
With simulations where I have many runs, this resulted in thousands of elements that needed to be parsed. Using map(., read_csv), this would take hours to transform. When I rewrote my script to apply read.csv in a lambda function, the operation completed in seconds.
I'm curious if there is some intermediate system I/O operation or error handling that creates a bottleneck you wouldn't run into with a single input file.
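A minimal sketch of the rewrite described above, with nested_col standing in (hypothetically) for a column whose elements are CSV-formatted strings:
library(purrr)
# slow in this situation: parsed <- map(nested_col, read_csv)
# much faster here: wrap base read.csv in an anonymous (lambda) function instead
parsed <- map(nested_col, function(x) read.csv(text = x, header = TRUE))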

R: Read multiple files and label them based on the file name

I have a folder with about 400 files with the same structure. Each of these files contains 4 columns with no header, corresponding to 4 climate variables. I need to add two new columns to each of these files, based on the name of the file. The structure of the name is MeteoData_PXCY, with X=CODE_PLOT and Y=CODE_COUNTRY. Once I have these two new columns I need to read all the files into one single dataset and aggregate, grouping by CODE_PLOT and CODE_COUNTRY, to calculate mean values. Hence, the final output is 400 rows, one row per CODE_PLOT and CODE_COUNTRY combination.
Example file MeteoData_P1C1.csv
32509 33.91 2.9155 4494.5 13.46
32540 63.03 3.9718 6520.8 25.12
32568 71.68 8.7874 11587 58.67
32599 116.38 7.8683 13286 62.58
32629 31.12 16.097 23555 135.35
32660 56.56 16.481 21886 130.24
32690 68.59 19.737 21677 141.15
32721 55.55 18.755 18830 117.39
32752 59.88 15.598 13579 81.06
32782 43.43 12.361 8622.2 54.57
Example MeteoData_P109C19.csv
32509 18.17 -0.70355 1413.5 9.93
32540 78 -0.43607 3574.6 10.46
32568 74.43 0.38645 7478.5 22.53
32599 73.19 2.5743 12352 42.85
32629 36.75 9.4852 21244 105.57
32660 61.65 13.753 21586 117.3
32690 86.16 15.991 20452 127.89
32721 98.02 12.713 13981 76.73
32752 32.14 9.9547 10850 53.13
32782 53.46 4.4252 5041.7 21.46
In the final output I should have this structure (without “;”):
Date; Precip; Temp; Rad; Pet; CODE_PLOT; CODE_COUNTRY
32540; 63.03; 3.9718; 6520.8; 25.12; 1; 1
32568; 71.68; 8.7874; 11587; 58.67; 9; 19
For the moment, I have:
setwd("MeteoData”) # Folder in which all the files are into
filenames <- list.files(pattern=".csv")
clim <- lapply(filenames, function(x) read.csv(file=x, header=FALSE))
You could put all your files in a new folder/directory, and then create a loop using list.files:
all.dfs <- list()
for (filename in list.files("some_dir")) {
  # put in the read.table call the appropriate arguments, including
  # column names for the existing data in the files
  all.dfs[[length(all.dfs) + 1]] <- read.table(filename, ...)
  all.dfs[[length(all.dfs)]]$CODE_PLOT <- sub(".*P(\\d*)C(\\d*)\\.csv", "\\1", filename)
  all.dfs[[length(all.dfs)]]$CODE_COUNTRY <- sub(".*P(\\d*)C(\\d*)\\.csv", "\\2", filename)
}
Then merging everything into one dataframe...
big.df <- do.call(rbind, all.dfs)
Haven't tested it, but feel free to ask questions in the comments.
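To finish the task described in the question, the codes can then be converted to numeric and the per-plot/per-country means computed; a sketch assuming the five data columns are named as in the desired output above:
big.df$CODE_PLOT <- as.numeric(big.df$CODE_PLOT)
big.df$CODE_COUNTRY <- as.numeric(big.df$CODE_COUNTRY)
colnames(big.df)[1:5] <- c("Date", "Precip", "Temp", "Rad", "Pet")
# one row per CODE_PLOT / CODE_COUNTRY combination, with mean climate values
means <- aggregate(cbind(Precip, Temp, Rad, Pet) ~ CODE_PLOT + CODE_COUNTRY,
                   data = big.df, FUN = mean)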

Reading sdmx-xml files into a dataframe in R

I was wondering if anyone has managed to read SDMX-XML files into a dataframe. The file I’d like to read is https://www.ecb.europa.eu/stats/sdmx/icpf/1/data/pension_funds.xml (1mb).
I saved the file as “pensions_funds.xml” to the pwd and tried to use the XML package to read it:
fileName <- system.file("pensions", "pensions_funds.xml", package="XML")
parsed<-xmlTreeParse("pension_funds.xml",getDTD=F)
r<-xmlRoot(parsed)
tmp = xmlSApply(r, function(x) xmlSApply(x, xmlValue))
The few lines above basically follow the example here http://www.omegahat.org/RSXML/gettingStarted.html
but I think I would first need to somehow ignore the header (I have pasted below the first couple of pages of the file I’m trying to read). So I think the above might work but it starts from the wrong node for my purposes. I would like to grab the obs_values, indexed by their time_period and ref_area.
The first thing would be to find the right node and start there however I suspect I might be on a fool’s errand since I have limited knowledge of data formats and I’m not sure the XML package can be used for SDMX-XML files. Smarter people appear to have tried to do this
http://opensdmxdevelopers.wikispaces.com/RSDMX
I can’t find this package for download on its homepage here
https://r-forge.r-project.org/projects/rsdmx/
(I can’t see any link/download section, but maybe I’m blind) and it seems to be in its early stages. The existence of rsdmx suggests that using the XML package to read SDMX might not be easy, so I’m ready to give up at this stage unless anyone has had success with this. Actually I’m mainly interested in reading this file
http://www.ecb.europa.eu/stats/sdmx/bsi/1/data/outstanding_amounts.xml
But this is a 10mb file so I was starting smaller.
edit3:
attempting sgibb's answer on the large file, using the changes from Mischa's comment
library("XML")
url <- "http://www.ecb.europa.eu/stats/sdmx/bsi/1/data/outstanding_amounts.xml"
sdmxHandler <- function() {
  ## data.frame which stores results
  data <- data.frame(stringsAsFactors = FALSE)
  ## counter to store current row
  i <- 1
  ## temp values to store current REF_AREA, BS_ITEM and BS_COUNT_SECTOR
  refArea <- NA
  bsItem <- NA
  bsCountSector <- NA
  ## handler subroutine for Obs tag
  Obs <- function(name, attr) {
    ## found an Obs tag and now fill data.frame
    data[i, "refArea"] <<- refArea
    data[i, "timePeriod"] <<- as.numeric(attr["TIME_PERIOD"])
    data[i, "obsValue"] <<- as.numeric(attr["OBS_VALUE"])
    data[i, "bsItem"] <<- bsItem
    data[i, "bsCountSector"] <<- bsCountSector
    i <<- i + 1
  }
  ## handler subroutine for Series tag
  Series <- function(name, attr) {
    refArea <<- attr["REF_AREA"]
    bsItem <<- as.character(attr["BS_ITEM"])
    bsCountSector <<- as.numeric(attr["BS_COUNT_SECTOR"])
  }
  return(list(getData = function() {return(data)},
              Obs = Obs, Series = Series))
}
## run parser
df <- xmlEventParse(file(url), handlers=sdmxHandler())$getData()
Specification mandate value for attribute OBS_VALUE
attributes construct error
Couldn't find end of Start Tag Obs line 15108
Premature end of data in tag Series line 15041
Premature end of data in tag DataSet line 91
Premature end of data in tag CompactData line 2
Error: 1: Specification mandate value for attribute OBS_VALUE
2: attributes construct error
3: Couldn't find end of Start Tag Obs line 15108
4: Premature end of data in tag Series line 15041
5: Premature end of data in tag DataSet line 91
6: Premature end of data in tag CompactData line 2
In addition: There were 50 or more warnings (use warnings() to see the first 50)
edit2:
the answer from sgibb looks ideal and works perfectly on the smaller file. I tried to run it on
url <- "http://www.ecb.europa.eu/stats/sdmx/bsi/1/data/outstanding_amounts.xml"
(the 10mb file, original link corrected), with the only modification being the addition of two lines:
data[i, "bsItem"] <<- as.character(attr["BS_ITEM"])
data[i, "bsCountSector"] <<- as.numeric(attr["BS_COUNT_SECTOR"])
(these are additional id variables which are needed to identify a row in this larger dataset).
It ran for a few minutes then finished with this error:
Error: 1: Specification mandate value for attribute TIME_PE
2: attributes construct error
3: Couldn't find end of Start Tag Obs line 20743
4: Premature end of data in tag Series line 20689
5: Premature end of data in tag DataSet line 91
6: Premature end of data in tag CompactData line 2
In addition: There were 50 or more warnings (use warnings() to see the first 50)
The basic format of the data seems very similar so I thought this might work. The basic format of the 10mb file is as below:
<Series FREQ="M" REF_AREA="AT" ADJUSTMENT="N" BS_REP_SECTOR="A" BS_ITEM="A20" MATURITY_ORIG="A" DATA_TYPE="1" COUNT_AREA="U2" BS_COUNT_SECTOR="0000" CURRENCY_TRANS="Z01" BS_SUFFIX="E" TIME_FORMAT="P1M" COLLECTION="E">
<Obs TIME_PERIOD="1997-09" OBS_VALUE="275.3" OBS_STATUS="A" OBS_CONF="F"/>
<Obs TIME_PERIOD="1997-10" OBS_VALUE="275.9" OBS_STATUS="A" OBS_CONF="F"/>
<Obs TIME_PERIOD="1997-11" OBS_VALUE="276.6" OBS_STATUS="A" OBS_CONF="F"/>
edit1:
desired data format:
Ref_area time_period obs_value
At 2006 118
At 2007 119
…
Be 2006 101
…
Here’s the first bit of the data.
</Header>
DataSet xsi:schemaLocation="https://www.ecb.europa.eu/vocabulary/stats/icpf/1 https://www.ecb.europa.eu/stats/sdmx/icpf/1/structure/2011-08-11/sdmx-compact.xsd" xmlns="https://www.ecb.europa.eu/vocabulary/stats/icpf/1">
<Group DECIMALS="0" TITLE_COMPL="Austria, reporting institutional sector Insurance corporations and pension funds - Closing balance sheet - All financial assets and liabilities - counterpart area World (all entities), counterpart institutional sector Total economy including Rest of the World (all sectors) - Credit (resources/liabilities) - Non-consolidated, Current prices - Euro, Neither seasonally nor working day adjusted - ESA95 TP table Not applicable" UNIT_MULT="9" UNIT="EUR" ESA95TP_SUFFIX="Z" ESA95TP_DENOM="E" ESA95TP_CONS="N" ESA95TP_DC_AL="2" ESA95TP_CPSECTOR="S" ESA95TP_CPAREA="A1" ESA95TP_SECTOR="S125" ESA95TP_ASSET="F" ESA95TP_TRANS="LE" ESA95TP_PRICE="V" ADJUSTMENT="N" REF_AREA="AT"/><Series ESA95TP_SUFFIX="Z" ESA95TP_DENOM="E" ESA95TP_CONS="N" ESA95TP_DC_AL="2" ESA95TP_CPSECTOR="S" ESA95TP_CPAREA="A1" ESA95TP_SECTOR="S125" ESA95TP_ASSET="F" ESA95TP_TRANS="LE" ESA95TP_PRICE="V" ADJUSTMENT="N" REF_AREA="AT" COLLECTION="E" TIME_FORMAT="P1Y" FREQ="A"><Obs OBS_CONF="F" OBS_STATUS="E" OBS_VALUE="112" TIME_PERIOD="2008"/><Obs OBS_CONF="F" OBS_STATUS="E" OBS_VALUE="119" TIME_PERIOD="2009"/><Obs OBS_CONF="F" OBS_STATUS="E" OBS_VALUE="125" TIME_PERIOD="2010"/><Obs OBS_CONF="F" OBS_STATUS="E" OBS_VALUE="127" TIME_PERIOD="2011"/></Series><Group D
RSDMX seems to be in an early development state. IMHO there is no package available yet, but you could easily implement it on your own using the XML package. I would suggest using xmlEventParse (see ?xmlEventParse for details):
EDIT: adapt example to changed requirements of outstanding_amounts.xml
EDIT2: add download.file
library("XML")
#url <- "http://www.ecb.europa.eu/stats/sdmx/icpf/1/data/pension_funds.xml"
url <- "http://www.ecb.europa.eu/stats/sdmx/bsi/1/data/outstanding_amounts.xml"
## download xml file to avoid download errors disturbing xmlEventParse
tmp <- tempfile()
download.file(url, tmp)
sdmxHandler <- function() {
  ## data.frame which stores results
  data <- data.frame(stringsAsFactors = FALSE)
  ## counter to store current row
  i <- 1
  ## temp values to store current REF_AREA, BS_ITEM and BS_COUNT_SECTOR
  refArea <- NA
  bsItem <- NA
  bsCountSector <- NA
  ## handler subroutine for Obs tag
  Obs <- function(name, attr) {
    ## found an Obs tag and now fill data.frame
    data[i, "refArea"] <<- refArea
    data[i, "bsItem"] <<- bsItem
    data[i, "bsCountSector"] <<- bsCountSector
    data[i, "timePeriod"] <<- as.Date(paste(attr["TIME_PERIOD"], "-01", sep = ""), format = "%Y-%m-%d")
    data[i, "obsValue"] <<- as.double(attr["OBS_VALUE"])
    ## update current row
    i <<- i + 1
  }
  ## handler subroutine for Series tag
  Series <- function(name, attr) {
    refArea <<- attr["REF_AREA"]
    bsItem <<- attr["BS_ITEM"]
    bsCountSector <<- as.numeric(attr["BS_COUNT_SECTOR"])
  }
  return(list(getData = function() {return(data)},
              Obs = Obs, Series = Series))
}
## run parser
df <- xmlEventParse(tmp, handlers=sdmxHandler())$getData()
head(df)
# refArea bsItem bsCountSector timePeriod obsValue
#1 DE A20 2210 12053 39.6
#2 DE A20 2210 12084 46.1
#3 DE A20 2210 12112 50.2
#4 DE A20 2210 12143 52.0
#5 DE A20 2210 12173 52.3
#6 DE A20 2210 12204 47.3
The package rsdmx allows you to read SDMX-ML files and coerce them into a data.frame. It is now hosted on GitHub and currently available on CRAN, but you can also install it easily from GitHub with the following:
require("devtools")
install_github("rsdmx", "opensdmx")
Applying to your data, you can do the following:
sdmx <- readSDMX("http://www.ecb.europa.eu/stats/sdmx/bsi/1/data/outstanding_amounts.xml")
df <- as.data.frame(sdmx)
More examples are given in the rsdmx wiki
Note that it currently loads the XML object into R as a slot of the SDMX R objects instantiated by rsdmx. In the future, we would like to investigate how rsdmx can use xmlEventParse (as suggested above by @sgibb) to read very large datasets.
library(XML)
xmlparsed <- xmlParse(file(url))
## obtain dataset node
series_data <- getNodeSet(xmlparsed, "//Series")
if (length(series_data) == 0) {
  datasetnode <- xmlChildren(xmlChildren(xmlparsed)[[1]])[[2]]
  series_data <- xmlChildren(datasetnode)[names(xmlChildren(datasetnode)) == "Series"]
}
## prepare dataset
dataset.frame <- data.frame(matrix(ncol = 3))
colnames(dataset.frame) <- c('REF_AREA', 'TIME_PERIOD', 'OBS_VALUE')
## loop over data
counter <- 1
for (i in 1:length(series_data)) {
  if ('Obs' %in% names(xmlChildren(series_data[[i]]))) { ## to ignore empty //Series nodes
    for (j in 1:length(xmlChildren(series_data[[i]]))) {
      dataset.frame[counter, 1] <- xmlAttrs(series_data[[i]])['REF_AREA']
      dataset.frame[counter, 2] <- xmlAttrs(series_data[[i]][[j]])['TIME_PERIOD']
      dataset.frame[counter, 3] <- xmlAttrs(series_data[[i]][[j]])['OBS_VALUE']
      counter <- counter + 1
    }
  }
}
head(dataset.frame, 5)
