I'm trying to pull data directly from an API into R using the httr package. The API doesn't require any authentication, and accepts JSON strings of lat, long, elevation, variable sets, and time period to estimate climate variables for any location. This is my first time using an API, but the code below is what I've cobbled together from various Stack Overflow posts.
library(jsonlite)
library(httr)
url = "http://apibc.climatewna.com/api/clmApi"
body <- data.frame(lat = c(48.98, 50.2), ## two example locations
                   lon = c(-115.02, -120),
                   el = c(1000, 100),
                   prd = c("Normal_1961_1990.nrm", "Normal_1961_1990.nrm"),
                   varYSM = c("Y", "SST"))
requestBody <- toJSON(list("output" = body), auto_unbox = TRUE) ## convert to JSON string
result <- POST("http://apibc.climatewna.com/api/clmApi", ## post to API
               body = requestBody,
               add_headers(`Content-Type` = "application/json"))
content(result)
I've tried several versions of this (e.g. writing the JSON string manually, passing the body as a list to POST with encode = "json"); it always runs, but the content always contains the error message below:
$Message
[1] "An error has occurred."
$ExceptionMessage
[1] "Object reference not set to an instance of an object."
$ExceptionType
[1] "System.NullReferenceException"
If I use GET and specify the variables directly in the URL
url = "http://apibc.climatewna.com/api/clmApi/LatLonEl?lat=48.98&lon=-115.02&el=1000&prd=Normal_1961_1990&varYSM=Y"
result <- GET(url)
content(result)
it produces the correct output, but then I can only obtain information for one location at a time. There isn't currently any public documentation about this API as it's very new, but I've attached a draft of the section explaining it using JS below. I would very much appreciate any help/suggestions on what I'm doing wrong!
Thank you!
The main problem is that jQuery.ajax encodes the data using jQuery.param before sending it to the API, so what it's sending looks something like [0][lat]=48.98&[0][lon]=-115.02.... I don't know of an R package that does the same encoding as jQuery.param, so we'll have to hack something together.
Modifying your example slightly:
library(httr)
body <- data.frame(lat = c(48.98, 50.2), ## two example locations
                   lon = c(-115.02, -120),
                   el = c(1000, 100),
                   prd = c("Normal_1961_1990", "Normal_1961_1990"),
                   varYSM = c("Y", "Y"))
Now, we do the encoding, like so:
out <- sapply(1:nrow(body), function(i) {
  paste(c(
    paste0(sprintf("[%d][lat]", i - 1), "=", body$lat[i]),
    paste0(sprintf("[%d][lon]", i - 1), "=", body$lon[i]),
    paste0(sprintf("[%d][el]", i - 1), "=", body$el[i]),
    paste0(sprintf("[%d][prd]", i - 1), "=", body$prd[i]),
    paste0(sprintf("[%d][varYSM]", i - 1), "=", body$varYSM[i])
  ), collapse = "&")
})
out <- paste(out, collapse = "&")
So now out is in a form that the API likes. Finally:
result <- POST(url = "http://apibc.climatewna.com/api/clmApi", ## post to API
               body = out,
               add_headers(`Content-Type` = "application/x-www-form-urlencoded"))
noting the Content-Type. We get
df <- do.call(rbind, lapply(content(result), as.data.frame, stringsAsFactors = FALSE))
str(df)
# 'data.frame': 2 obs. of 29 variables:
# $ lat : chr "48.98" "50.2"
# $ lon : chr "-115.02" "-120"
# $ elev : chr "1000" "100"
# $ prd : chr "Normal_1961_1990" "Normal_1961_1990"
# $ varYSM : chr "Y" "Y"
# $ MAT : chr "5.2" "8"
# $ MWMT : chr "16.9" "20.2"
# $ MCMT : chr "-6.7" "-5.6"
# $ TD : chr "23.6" "25.7"
# $ MAP : chr "617" "228"
# $ MSP : chr "269" "155"
# $ AHM : chr "24.7" "79.1"
# $ SHM : chr "62.9" "130.3"
# $ DD_0 : chr "690" "519"
# $ DD5 : chr "1505" "2131"
# $ DD_18 : chr "4684" "3818"
# $ DD18 : chr "60" "209"
# $ NFFD : chr "165" "204"
# $ bFFP : chr "150" "134"
# $ eFFP : chr "252" "254"
# $ FFP : chr "101" "120"
# $ PAS : chr "194" "34"
# $ EMT : chr "-36.3" "-32.7"
# $ EXT : chr "37.1" "41.2"
# $ Eref : chr "14.7" "13.6"
# $ CMD : chr "721" "862"
# $ MAR : chr "347" "679"
# $ RH : chr "57" "57"
# $ Version: chr "ClimateBC_API_v5.51" "ClimateBC_API_v5.51"
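If you ever need to send more columns, the same encoding can be generated from the column names rather than hard-coding each field. Here is a minimal sketch (jq_param is just an illustrative helper name; like the code above, it does not URL-encode values, which is fine for these particular ones):
jq_param <- function(df) {
  pairs <- lapply(seq_len(nrow(df)), function(i) {
    ## build "[row][column]=value" pairs, zero-indexed like jQuery.param
    paste0(sprintf("[%d][%s]", i - 1, names(df)), "=",
           vapply(df[i, ], as.character, character(1)))
  })
  paste(unlist(pairs), collapse = "&")
}
## identical(jq_param(body), out) should be TRUE for the example above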
Related
I'm trying to get an API response using a URL that is stored in a data frame I just retrieved from the same API, but I'm receiving the error:
"Error: arguments imply differing number of rows"
Does someone know how to fix it?
SCRIPT SO FAR
install.packages(c("jsonlite", "httr"))
library(jsonlite)
library(httr)
### Generating URL and first request
url_deputados <- "https://dadosabertos.camara.leg.br/api/v2/deputados?idLegislatura=57&ordem=ASC&ordenarPor=nome"
get_deputados <- GET(url_deputados)
### Transforming it to text
deputados_text <- content(get_deputados, "text")
deputados_text
### Converting
deputados_json <- fromJSON(deputados_text, flatten = TRUE)
deputados_json
### Transforming it to table
deputados_df <- as.data.frame(deputados_json)
deputados_df
### And removing the last two columns, which I don't need
deputados_df <- deputados_df[1:9]
### Now, for the secondary requests, I'm creating a URL with the id that is present in the first column of the data frame I just got
url_base <- "``https://dadosabertos.camara.leg.br/api/v2/``"
url_deputados <- "deputados/"
url_id <- deputados_df$dados.id
id_list <- c(url_id)
i <- 1
url <- paste0(url_base, url_deputados, id_list[i])
url
### Up to this point everything works, but I need to make sequential requests so I can GET the info for the next line of the existing data frame
while (i <= 531) {
  print("Próxima página encontrada, baixando...") # "Next page found, downloading..."
  get_deputados_id <- GET(paste0(url_base, url_deputados, id_list[i]))
  deputados_id_text <- content(get_deputados_id, "text")
  deputados_id_json <- fromJSON(deputados_id_text, flatten = TRUE)
  deputados_id_df <- as.data.frame(deputados_id_json)
  i <- i + 1
}
And this is where I receive the error message.
When you run into problems at one line in your code, stop and look at the previous results. For instance, for me (since you didn't specify), I'm getting an error here:
deputados_df <- as.data.frame(deputados_json)
# Error in (function (..., row.names = NULL, check.rows = FALSE, check.names = TRUE, :
# arguments imply differing number of rows: 532, 3
So ... let's look at deputados_json:
str(deputados_json)
# List of 2
# $ dados:'data.frame': 532 obs. of 9 variables:
# ..$ id : int [1:532] 220593 204379 220714 221328 204560 204528 121948 74646 160508 136811 ...
# ..$ uri : chr [1:532] "https://dadosabertos.camara.leg.br/api/v2/deputados/220593" "https://dadosabertos.camara.leg.br/api/v2/deputados/204379" "https://dadosabertos.camara.leg.br/api/v2/deputados/220714" "https://dadosabertos.camara.leg.br/api/v2/deputados/221328" ...
# ..$ nome : chr [1:532] "Abilio Brunini" "Acácio Favacho" "Adail Filho" "Adilson Barroso" ...
# ..$ siglaPartido : chr [1:532] "PL" "MDB" "REPUBLICANOS" "PL" ...
# ..$ uriPartido : chr [1:532] "https://dadosabertos.camara.leg.br/api/v2/partidos/37906" "https://dadosabertos.camara.leg.br/api/v2/partidos/36899" "https://dadosabertos.camara.leg.br/api/v2/partidos/37908" "https://dadosabertos.camara.leg.br/api/v2/partidos/37906" ...
# ..$ siglaUf : chr [1:532] "MT" "AP" "AM" "SP" ...
# ..$ idLegislatura: int [1:532] 57 57 57 57 57 57 57 57 57 57 ...
# ..$ urlFoto : chr [1:532] "https://www.camara.leg.br/internet/deputado/bandep/220593.jpg" "https://www.camara.leg.br/internet/deputado/bandep/204379.jpg" "https://www.camara.leg.br/internet/deputado/bandep/220714.jpg" "https://www.camara.leg.br/internet/deputado/bandep/221328.jpg" ...
# ..$ email : chr [1:532] "dep.abiliobrunini#camara.leg.br" "dep.acaciofavacho#camara.leg.br" "dep.adailfilho#camara.leg.br" "dep.adilsonbarroso#camara.leg.br" ...
# $ links:'data.frame': 3 obs. of 2 variables:
# ..$ rel : chr [1:3] "self" "first" "last"
# ..$ href: chr [1:3] "https://dadosabertos.camara.leg.br/api/v2/deputados?idLegislatura=57&ordem=ASC&ordenarPor=nome" "https://dadosabertos.camara.leg.br/api/v2/deputados?idLegislatura=57&ordem=ASC&ordenarPor=nome&pagina=1&itens=1000" "https://dadosabertos.camara.leg.br/api/v2/deputados?idLegislatura=57&ordem=ASC&ordenarPor=nome&pagina=1&itens=1000"
(Hint: that's not unambiguously converted into a frame.)
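You can reproduce the error in miniature (an illustration, not from the original post): two list components with different row counts cannot be column-bound into one data frame.
as.data.frame(list(dados = data.frame(x = 1:532), links = data.frame(y = 1:3)))
# Error in (function (..., row.names = NULL, check.rows = FALSE, ... :
#   arguments imply differing number of rows: 532, 3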
My guess is that you just need to access $dados:
head(deputados_json$dados)
# id uri nome siglaPartido uriPartido siglaUf idLegislatura urlFoto email
# 1 220593 https://dadosabertos.camara.leg.br/api/v2/deputados/220593 Abilio Brunini PL https://dadosabertos.camara.leg.br/api/v2/partidos/37906 MT 57 https://www.camara.leg.br/internet/deputado/bandep/220593.jpg dep.abiliobrunini#camara.leg.br
# 2 204379 https://dadosabertos.camara.leg.br/api/v2/deputados/204379 Acácio Favacho MDB https://dadosabertos.camara.leg.br/api/v2/partidos/36899 AP 57 https://www.camara.leg.br/internet/deputado/bandep/204379.jpg dep.acaciofavacho#camara.leg.br
# 3 220714 https://dadosabertos.camara.leg.br/api/v2/deputados/220714 Adail Filho REPUBLICANOS https://dadosabertos.camara.leg.br/api/v2/partidos/37908 AM 57 https://www.camara.leg.br/internet/deputado/bandep/220714.jpg dep.adailfilho#camara.leg.br
# 4 221328 https://dadosabertos.camara.leg.br/api/v2/deputados/221328 Adilson Barroso PL https://dadosabertos.camara.leg.br/api/v2/partidos/37906 SP 57 https://www.camara.leg.br/internet/deputado/bandep/221328.jpg dep.adilsonbarroso#camara.leg.br
# 5 204560 https://dadosabertos.camara.leg.br/api/v2/deputados/204560 Adolfo Viana PSDB https://dadosabertos.camara.leg.br/api/v2/partidos/36835 BA 57 https://www.camara.leg.br/internet/deputado/bandep/204560.jpg dep.adolfoviana#camara.leg.br
# 6 204528 https://dadosabertos.camara.leg.br/api/v2/deputados/204528 Adriana Ventura NOVO https://dadosabertos.camara.leg.br/api/v2/partidos/37901 SP 57 https://www.camara.leg.br/internet/deputado/bandep/204528.jpg dep.adrianaventura#camara.leg.br
After that, make sure you fix your url_base; it should almost certainly not contain so many backticks.
Finally, you should do the same thing in your while loop:
while (i <= 531) {
  get_deputados_id <- GET(paste0(url_base, url_deputados, id_list[i]))
  deputados_id_text <- content(get_deputados_id, "text")
  deputados_id_json <- fromJSON(deputados_id_text, flatten = TRUE)
  # deputados_id_df <- as.data.frame(deputados_id_json)
  deputados_id_df <- deputados_id_json$dados
  i <- i + 1
}
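Note that, as written, the loop overwrites deputados_id_df on every iteration, so only the last deputy's details survive. If the goal is to keep all of them, one option is to collect each result in a list first. A rough sketch (resultados is just an illustrative name, and url_base is assumed to have been cleaned of the backticks):
resultados <- vector("list", length(id_list))
for (i in seq_along(id_list)) {
  resp <- GET(paste0(url_base, url_deputados, id_list[i]))
  ## keep only the $dados piece, as above
  resultados[[i]] <- fromJSON(content(resp, "text"), flatten = TRUE)$dados
}
## each element of resultados now holds one deputy's details,
## ready for whatever post-processing you need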
I'd like to convert an sf object to a data frame and later restore it to its original state. But once I convert the geometry with st_as_text(st_sfc(stands_sel$geometry)), it proves very difficult to recover it. In my example:
library(sf)
# get AOI in shapefile
download.file(
"https://github.com/Leprechault/trash/raw/main/sel_stands_CMPC.zip",
zip_path <- tempfile(fileext = ".zip")
)
unzip(zip_path, exdir = tempdir())
# Open the file
setwd(tempdir())
stands_sel <- st_read("sel_stands_CMPC.shp")
st_crs(stands_sel) = 4326
# Extract geometry as text
geom <- st_as_text(st_sfc(stands_sel$geometry))
# Add the features
features <- st_drop_geometry(stands_sel)
str(features)
# Joining feature + geom
geo_df <- cbind(features, geom)
str(geo_df)
# 'data.frame': 2 obs. of 17 variables:
# $ CD_USO_SOL: num 2433 9053
# $ ID_REGIAO : num 11 11
# $ ID_PROJETO: chr "002" "344"
# $ PROJETO : chr "BARBA NEGRA" "CAMPO SECO"
# $ CD_TALHAO : chr "159A" "016A"
# $ CARACTERIS: chr "Plantio Comercial" "Plantio Comercial"
# $ CARACTER_1: chr "Produtivo" "Produtivo"
# $ CICLO : int 2 1
# $ ROTACAO : int 1 1
# $ DATA_PLANT: chr "2008/04/15" "2010/04/15"
# $ LOCALIDADE: chr "BARRA DO RIBEIRO" "DOM FELICIANO"
# $ ESPACAMENT: chr "3.00 x 2.50" "3.5 x 2.14"
# $ ESPECIE : chr "SALIGNA" "DUNNI"
# $ SISTEMA_PR: chr "MACRO ESTACA - EUCALIPTO" "SEMENTE - EUCALIPTO"
# $ VLR_AREA : num 8.53 28.07
# $ ID_UNIQUE : chr "BARBANEGRA159A" "CAMPOSECO016A"
# $ geom : chr "MULTIPOLYGON (((-51.21423 -30.35172, -51.21426 -30.35178, -51.2143 -30.35181, -51.21432 -30.35186, -51.21433 -3"| __truncated__
# Return to original format again
stands_sf <- geo_df %>%
st_geometry(geom) %>%
sf::st_as_sf(crs = 4326)
#Error in UseMethod("st_geometry") :
Please, any help to restore my stands_sf object to its original state?
I think geom isn't in a format st_geometry is expecting. st_as_text converted your geometry into WKT as discussed in the help:
The returned WKT representation of simple feature geometry conforms to the simple features access specification and extensions, known as EWKT, supported by PostGIS and other simple features implementations for addition of SRID to a WKT string.
https://r-spatial.github.io/sf/reference/st_as_text.html
Instead, use st_as_sf(wkt=) to set the new (old) geometry.
st_as_sf(geo_df, wkt = "geom", crs = 4326)
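For a quick sanity check of the round trip (a sketch reusing the objects above):
stands_sf <- st_as_sf(geo_df, wkt = "geom", crs = 4326)
class(stands_sf)             # should be "sf" "data.frame"
st_geometry_type(stands_sf)  # both features should come back as MULTIPOLYGON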
I'd like to create a BigQuery table from geoJSON files. Although geoJSON is an accepted format in BQ (NEWLINE_DELIMITED_JSON), and bq_table_create() in the bigrquery package accepts a bq_fields specification or something coercible to it (like a data frame), the function doesn't work. In my example below, the output error is Erro: Unsupported type: list:
library(sf)
library(bigrquery)
library(DBI)
library(googleAuthR)
library(geojsonsf)
library(geojsonR)
# Convert shapefile to geoJSON
stands_sel <- st_read(
"D:/Dropbox/Stinkbug_Ml_detection_CMPC/dashboard/v_08_CMPC/sel_stands_CMPC.shp")
# Open as geoJSON
geo <- sf_geojson(stands_sel)
# Convert geoJSON to data frame
geo_js_df <- as.data.frame(geojson_wkt(geo))
str(geo_js_df)
# 'data.frame': 2 obs. of 17 variables:
# $ SISTEMA_PR: chr "MACRO ESTACA - EUCALIPTO" "SEMENTE - EUCALIPTO"
# $ ESPECIE : chr "SALIGNA" "DUNNI"
# $ ID_UNIQUE : chr "BARBANEGRA159A" "CAMPOSECO016A"
# $ CICLO : num 2 1
# $ LOCALIDADE: chr "BARRA DO RIBEIRO" "DOM FELICIANO"
# $ ROTACAO : num 1 1
# $ CARACTER_1: chr "Produtivo" "Produtivo"
# $ VLR_AREA : num 8.53 28.07
# $ ID_REGIAO : num 11 11
# $ CD_USO_SOL: num 2433 9053
# $ DATA_PLANT: chr "2008/04/15" "2010/04/15"
# $ ID_PROJETO: chr "002" "344"
# $ CARACTERIS: chr "Plantio Comercial" "Plantio Comercial"
# $ PROJETO : chr "BARBA NEGRA" "CAMPO SECO"
# $ ESPACAMENT: chr "3.00 x 2.50" "3.5 x 2.14"
# $ CD_TALHAO : chr "159A" "016A"
# $ geometry :List of 2
# ..$ : 'wkt' chr "MULTIPOLYGON (((-51.2142 -30.3517,-51.2143 -30.3518,-51.2143 -30.3518,-51.2143 -30.3519,-51.2143 -30.3519,-51.2"| __truncated__
# ..$ : 'wkt' chr "MULTIPOLYGON (((-52.3214 -30.4271,-52.3214 -30.4272,-52.3214 -30.4272,-52.3215 -30.4272,-52.3215 -30.4272,-52.3"| __truncated__
# - attr(*, "wkt_column")= chr "geometry"
# Insert information inside BQ
bq_conn <- dbConnect(bigquery(),
                     project = "my-project",
                     use_legacy_sql = FALSE)
# First create the table
players_table = bq_table(project = "my-project", dataset = "stands_ROI_2021", table = "CF_2021")
bq_table_create(x = players_table, fields = as_bq_fields(geo_js_df))
Erro: Unsupported type: list
You can upload a data frame with a list-type column to BigQuery by using bq_table_upload(). Try this in your script instead of bq_table_create():
bq_table_upload(players_table, geo_js_df)
For your reference, I tried this on my end using this sample data with a list-type column:
d <- data.frame(id = 1:2,
                name = c("Jon", "Mark"),
                children = I(list(c("Mary", "James"),
                                  c("Greta", "Sally"))))
R console:
Created BQ table:
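A minimal sketch of that test, with placeholder project/dataset/table names (not from the original question):
sample_table <- bq_table("my-project", "my-dataset", "children_test") # placeholder names
bq_table_upload(sample_table, d)
bq_table_download(sample_table)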
EDIT:
As per this documentation, FeatureCollection is not yet supported in BigQuery; however, there is an ongoing feature request you can find here. The workaround is to convert the GeoJSON file to BigQuery newline-delimited JSON before converting it to a data frame.
To convert the GeoJSON file to BigQuery newline-delimited JSON, follow these steps:
Install node.js.
Add packages:
npm install fs JSONStream line-input-stream yargs
Clone the github repository:
git clone https://github.com/mentin/geoscripts.git
Change directory:
cd geoscripts/geojson2bq/
Convert the GeoJSON file to BigQuery newline-delimited JSON:
node geojson2bqjson.js sel_stands.geojson > out.json
Using the newline-delimited JSON file, convert it to a data frame in the R console, then use bq_table_upload() to upload the data to BigQuery:
library(bigrquery)
library(dplyr)
library(tidyverse)
library(jsonlite)
out <- stream_in(file('out.json'))
projectid<-"my-project"
datasetid<-"my-dataset"
bq_conn <- dbConnect(bigquery(),
                     project = projectid,
                     dataset = datasetid,
                     use_legacy_sql = FALSE)
players_table = bq_table(project = "my-project", dataset = "my-dataset", table = "CF_2021_test5")
bq_table_upload(players_table, out)
bq_table_download(players_table)
R console:
BigQuery table:
I have ASCII files with data separated by $ signs.
There are 23 columns in the data and the first row holds the column names, but there is an inconsistency between the line endings which causes R to import the data improperly, shifting the data left with respect to their columns.
Header line:
ISR$CASE$I_F_COD$FOLL_SEQ$IMAGE$EVENT_DT$MFR_DT$FDA_DT$REPT_COD$MFR_NUM$MFR_SNDR$AGE$AGE_COD$GNDR_COD$E_SUB$WT$WT_COD$REPT_DT$OCCP_COD$DEATH_DT$TO_MFR$CONFID$REPORTER_COUNTRY
which does not end with a $ sign.
First row line:
7215577$8135839$I$$7215577-0$20101011$$20110104$DIR$$$67$YR$F$N$220$LBS$20110102$CN$$N$Y$UNITED STATES$
Which does end with a $ sign.
My import command:
read.table(filename, header=TRUE, sep="$", comment.char="", quote="")
My guess is that the inconsistency between the line endings causes R to think that the records have one more column than the header, thus treating the first column as a row.names column, which is not correct. Adding the specification row.names=NULL does not fix the issue.
If I manually add a $ sign at the end of the header line the problem is solved, but this is infeasible because the issue occurs in hundreds of files. Is there a way to specify how to read the header line? Do I have any alternative?
Additional info: the headers change across different files, so I cannot set my own vector of column names
Create a dummy test file:
cat("ISR$CASE$I_F_COD$FOLL_SEQ$IMAGE$EVENT_DT$MFR_DT$FDA_DT$REPT_COD$MFR_NUM$MFR_SNDR$AGE$AGE_COD$GNDR_COD$E_SUB$WT$WT_COD$REPT_DT$OCCP_COD$DEATH_DT$TO_MFR$CONFID$REPORTER_COUNTRY\n7215577$8135839$I$$7215577-0$20101011$$20110104$DIR$$$67$YR$F$N$220$LBS$20110102$CN$$N$Y$UNITED STATES$",
file="deleteme.txt",
"\n")
Solution using gsub:
First read the file as text and then edit its content:
file_path <- "deleteme.txt"
fh <- file(file_path)
file_content <- readLines(fh)
close(fh)
Either add a $ at the end of the header row:
file_content[1] <- paste0(file_content[1], "$")
Or remove $ from the end of all rows:
file_content <- gsub("\\$$", "", file_content)
Then we write the fixed file back to disk:
cat(paste0(file_content, collapse="\n"), file=paste0("fixed_", file_path), "\n")
Now we can read the file:
df <- read.table(paste0("fixed_", file_path), header=TRUE, sep="$", comment.char="", quote="", stringsAsFactors=FALSE)
And get the desired structure:
str(df)
'data.frame': 1 obs. of 23 variables:
$ ISR : int 7215577
$ CASE : int 8135839
$ I_F_COD : chr "I"
$ FOLL_SEQ : logi NA
$ IMAGE : chr "7215577-0"
$ EVENT_DT : int 20101011
$ MFR_DT : logi NA
$ FDA_DT : int 20110104
$ REPT_COD : chr "DIR"
$ MFR_NUM : logi NA
$ MFR_SNDR : logi NA
$ AGE : int 67
$ AGE_COD : chr "YR"
$ GNDR_COD : logi FALSE
$ E_SUB : chr "N"
$ WT : int 220
$ WT_COD : chr "LBS"
$ REPT_DT : int 20110102
$ OCCP_COD : chr "CN"
$ DEATH_DT : logi NA
$ TO_MFR : chr "N"
$ CONFID : chr "Y"
$ REPORTER_COUNTRY: chr "UNITED STATES "
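Alternatively, if you would rather not write fixed copies of hundreds of files, you can read the header yourself and give read.table an explicit set of column names, with one extra dummy column to absorb the trailing $ on data rows. A sketch (the dummy column name is arbitrary, and fill = TRUE guards against rows that lack the trailing $):
hdr <- strsplit(readLines(file_path, n = 1), "$", fixed = TRUE)[[1]]
df <- read.table(file_path, skip = 1, sep = "$", quote = "", comment.char = "",
                 col.names = c(hdr, "TRAILING"), fill = TRUE,
                 stringsAsFactors = FALSE)
df$TRAILING <- NULL  # drop the dummy column again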
I am trying to find a way to retrieve data from the Harvard Dataverse website through R. I am using the "dataverse" and "dvn" packages, among others. Many of the data files end with ".tab", although they are not formatted as normal tab-delimited text.
I have done this:
library(dataverse)
## 01. Using the dataverse server and making a search
Sys.setenv("DATAVERSE_SERVER" ="dataverse.harvard.edu")
## 02. Loading the dataset that I chose, by url
doi_url <- "https://doi.org/10.7910/DVN/ZTCWYQ"
my_dataset <- get_dataset(doi_url)
## 03. Grabbing the first file of the dataset
## which is named "001_AppendixC.tab"
my_files <- my_dataset$files$label
my_file <- get_file(my_files[1], doi_url)
AppendixC <- tempfile()
writeBin(my_file, AppendixC)
read.table(AppendixC)
> Error in scan(file = file, what = what, sep = sep, quote = quote, dec = dec, :
> line 1 did not have 2 elements
> In addition: Warning message:
> In read.table(AppendixC) :
> line 1 appears to contain embedded nulls
Any hint?
The problem is that dataverse::get_file() returns the file in a raw binary format. The easiest way to load it into memory is to write it to a tempfile with writeBin() and then read that file with the appropriate import/read function.
Here is a function that should automagically read it into memory:
# Uses rio, which automatically chooses the appropriate import/read
# function based on file type.
library(rio)
install_formats() # only needs to be run once after package installation
load_raw_file <- function(raw, type) {
  # validate the requested type against formats rio can import
  match.arg(
    arg = type,
    choices = c(
      "csv", "tab", "psv", "tsv", "sas7bdat",
      "sav", "dta", "xpt", "por", "xls", "xlsx",
      "R", "RData", "rda", "rds", "rec", "mtb",
      "feather", "csv.gz", "fwf"
    )
  )
  # write the raw vector to a temp file with the right extension,
  # import it with rio, then clean up
  tmp <- tempfile(fileext = paste0(".", type))
  writeBin(as.vector(raw), tmp)
  out <- import(tmp)
  unlink(tmp)
  out
}
Let's try it out with your file, which is actually an Excel file.
library(dataverse)
raw <- get_file(
"001_AppendixC.tab",
"https://doi.org/10.7910/DVN/ZTCWYQ"
)
data <- load_raw_file(raw, "xlsx")
And look at the data:
str(data)
> 'data.frame': 132 obs. of 17 variables:
> $ Country : chr "Afghanistan" "Albania" "Algeria" "Angola" ...
> $ UN_9193 : chr "37.4" "7.7" "9.1" "65.400000000000006" ...
> $ UN_9901 : chr "46.1" "7.2" "10.7" "50" ...
> $ UN_0709 : chr "24.6" "9.6999999999999993" "7.5" "23.7" ...
> $ UN_1416 : chr "23" "4.9000000000000004" "4.5999999999999996" "14" ...
> $ stu90_94 : chr "51.3" "37.200000000000003" "22.9" "52.9" ...
> $ stu98_02 : chr "54.7" "39.200000000000003" "23.6" "47.1" ...
> $ stu06_10 : chr "51.3" "23.1" "13.2" "29.2" ...
> $ stu12_16 : chr "40.9" "17.899999999999999" "11.7" "37.6" ...
> $ wast90_94: chr "11.5" "9.4" "7.1" "7.9" ...
> $ wast98_02: chr "13.4" "12.2" "3.1" "8.6999999999999993" ...
> $ wast06_10: chr "8.9" "9.4" "4.0999999999999996" "8.1999999999999993" ...
> $ wast12_16: chr "9.5" "6.2" "4.0999999999999996" "4.9000000000000004" ...
> $ UM1992 : chr "16.8" "3.7" "4.5" "22.6" ...
> $ UM2000 : chr "13.7" "2.6" "4" "21.7" ...
> $ UM2008 : chr "11" "1.8" "2.9" "19.2" ...
> $ UM2015 : chr "9.1" "1.4" "2.6" "15.7" ...
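For this particular file, which turned out to be an Excel workbook, the rio wrapper is not strictly necessary; the same temp-file trick works with readxl directly (a sketch under the same assumption about the file type):
tmp <- tempfile(fileext = ".xlsx")
writeBin(as.vector(raw), tmp)   # raw is the object returned by get_file() above
data_xlsx <- readxl::read_excel(tmp)
unlink(tmp)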